Hacker News

I was doing some naïve set theory the other day, and I found a proof of the Riemann hypothesis, by contradiction.

Assume the Riemann hypothesis is false. Then, consider the proposition "{a|a∉a}∈{a|a∉a}". By the law of the excluded middle, it suffices to consider each case separately. Assuming {a|a∉a}∈{a|a∉a}, we find {a|a∉a}∉{a|a∉a}, for a contradiction. Instead, assuming {a|a∉a}∉{a|a∉a}, we find {a|a∉a}∈{a|a∉a}, for a contradiction. Therefore, "the Riemann hypothesis is false" is false. By the law of the excluded middle, we have shown the Riemann hypothesis is true.
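The contradiction in the middle is just Russell's paradox, and since the "RH is false" assumption is never actually used, the argument is really the principle of explosion: an inconsistent theory proves everything. A minimal Lean 4 sketch of that core step, where the type `S`, the relation `mem`, and the unrestricted comprehension hypothesis `comp` are all assumptions standing in for naïve set theory (no classical axioms are needed):

```lean
-- Russell's paradox: any theory with unrestricted comprehension proves
-- False, after which any proposition (RH included) follows vacuously.
-- `S` and `mem` are hypothetical; nothing here is a real set theory.
theorem russell {S : Type} (mem : S → S → Prop)
    (comp : ∀ p : S → Prop, ∃ s : S, ∀ x : S, mem x s ↔ p x) : False :=
  (comp (fun x => ¬ mem x x)).elim (fun r hr =>
    -- r is the "set of all sets not containing themselves"
    have h : mem r r ↔ ¬ mem r r := hr r
    have hn : ¬ mem r r := fun hm => h.mp hm hm
    hn (h.mpr hn))
```

Note that the proof is constructive: the law of the excluded middle invoked above is not even required to reach the contradiction.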

Naïve AGI is an apt analogy in this regard, but I feel these systems are neither simple nor elegant enough to deserve the name naïve.



Actually, naive AGI such as an LLM is way more intelligent than a human. Unfortunately, that does not make it smarter. Let me explain.

When I see your comment, I think: your assumptions are contradictory. Why? Because I am familiar with Russell's paradox and the Riemann hypothesis, and you're simply WRONG (inconsistent with your own implicit assumptions).

However, when an LLM sees your comment (during training), it's actually much more open-minded about it. It thinks, ha, so there is a flavor of set theory in which RH is true. Better remember it! So when this topic comes up again, the LLM won't think you're WRONG, as a human would; it will instead think: well, maybe he's working with RH in naive set theory, so it's OK to be inconsistent.

So LLMs are more open-minded, because they're made to learn more things, and they remember most of them. But somewhere along the training road, their brain falls out, and they become dumber.

But to be smart, you need to learn to say NO to BS like what you wrote. Being closed-minded and having an opinion can be good.

So I think there's a tradeoff between the ability to learn new things (open-mindedness) and enforcing consistency (closed-mindedness). Perhaps the AGI we're looking for is a compromise between the two, but current LLMs (naive AGI) lie at the extremes of the spectrum.

If I am right, maybe there is no superintelligence. Extreme open-mindedness is just another name for gullibility, and extreme closed-mindedness is just another name for being unadaptable. (Actually, LLMs exhibit both extremes, during training and during use, with little in between.)


> It thinks, ha, so there is a flavor of set theory in which RH is true.

To the extent that LLMs think, they think "people say there's a flavour of set theory in which RH is true". LLMs don't care about facts: they don't even know that an external reality exists. You could design an AI system that operates the way you describe, and it would behave a bit like an LLM in this respect, but the operating principles are completely different, and not comparable. Everything else you've said is reasonable, but – again – doesn't apply to LLMs, which aren't doing what we intuitively believe them to be doing.


I don't think your opinion about LLMs' inner workings changes anything in what I said. Extremely open-minded people also don't care about facts, in the sense that they just accept whatever their perception of reality is, with no prejudice (in particular, no demand for consistency of any form). How reality is actually perceived, or whether it corresponds to human reality, is immaterial to my argument.



