brianush1's comments | Hacker News

What would you title this article to make it less "clickbait"? This is one of the least clickbait headlines I've seen; it's literally just describing what's in the article.


> Realistically, what are the odds that our not very large or clever brains really do have the potential to understand the entire universe

My belief on this is not entirely rational, of course, but it seems to me that there's probably a sort of Turing-completeness for intelligence/understanding, where as soon as a mind starts being able to understand abstraction, given enough time and resources, it can probably understand the entire universe.

It would also seem presumptuous to say that brainfuck is as powerful as every other programming language that exists, and yet we know it to be true. The fundamental reason we can prove that Turing-complete languages are equivalent to one another is that we can build the same abstractions in each, so intuitively it feels like a similar principle holds for human intelligence.


apparently the pegs only have to touch the holes, they don't have to line up perfectly


ahhhhhhh thanks


I'm not seeing anything in that graph that implies that o1 ever fails on "what is 6*1?" The chart is graphing the number of digits on each axis; it fails on "what is (some 6 digit number) * (some 1 digit number)"



Your initial translation into JavaScript is a representation of the statement "All my things are green hats", which is not the same as "All my hats are green."

The statement "All my hats are green" would map to

    things.every(thing => thing.type != 'hat' || thing.color == 'green')
i.e., everything the person owns must either not be a hat at all or, if it is a hat, be green.

The negated form would then be

    things.some(thing => thing.type == 'hat' && thing.color != 'green')
i.e., the person owns at least one hat that is not green.
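
For a quick sanity check, here are both predicates run against a hypothetical things array (the items are invented purely for illustration):

    // Hypothetical inventory; the red hat is the counterexample.
    const things = [
      { type: 'hat', color: 'green' },
      { type: 'hat', color: 'red' },
      { type: 'shirt', color: 'blue' },
    ];

    // "All my hats are green" -- false, because of the red hat.
    console.log(things.every(thing => thing.type != 'hat' || thing.color == 'green'));

    // Negation: "I own at least one hat that is not green" -- true.
    console.log(things.some(thing => thing.type == 'hat' && thing.color != 'green'));

Remove the red hat and both results flip, as expected.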


pretty sure you have a typo, should be "If not, consider (sqrt(2) ^ sqrt(2)) ^ sqrt(2)."
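
(The reason that's the right expression: if sqrt(2) ^ sqrt(2) turns out to be irrational, then

    (sqrt(2) ^ sqrt(2)) ^ sqrt(2) = sqrt(2) ^ (sqrt(2) * sqrt(2)) = sqrt(2) ^ 2 = 2

which is rational, so either way an irrational number raised to an irrational power can be rational.)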


it works with textbooks too though


Isn't doing the exercises a lot more efficient?


Why does there need to be an "I" that uses language to transmit information? Language itself encodes information. I can read a piece of text and gain something from it. Where the text came from is irrelevant.


> Language itself encodes information.

Which it does in a lossy manner. Information is independent of language, and the more complex the information, the more language fails; that's why there are so many mediums for communication. Language has three main components: the symbols, the grammar, and the dictionary. The first refers to the tokens of our vocabulary, the second to the rules for arranging these tokens, and the third describes the relation of the tokens to the things they represent.

The relation between the three is interdependent. We name the new things we encounter, creating entries in the dictionary; we figure out the rules that govern these things and their relation to other things encountered previously. And thus we can issue statements. We can also name these statements, and it continues recursively. But each of us possesses their own copy of all this, with their own variations. What you gain from what I said may be different from what I intended to transmit, and what I intended to transmit may be a poor description of the thing itself. So flawed interpretation, flawed description, and flawed transmission result in flawed understanding. To correct that, you need to be in the presence of the thing itself. Failing that, you strive to establish the tokens, the grammar, and the dictionary of the person who wrote the text.

In LLMs, the dictionary is missing. The token "snow" has no relation to the thing we call snow. But because it's often placed near other tokens like "ice", "freeze", etc., a rule emerges (the embedding?) that these tokens must be related to each other. In what way, it does not know. But if we apply the collected data in a statistical manner, we can arrange these tokens and the result will probably be correct. There's still a non-zero chance that the generated statement is meaningless, though, since there's no foundational rule driving it. So there are only tokens, and rules derived from analyzing texts (which lack the foundational rules that come from being in the real world).
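
To make the "a rule emerges" part concrete, here's a toy sketch (the vectors are invented for illustration; real models learn hundreds of dimensions from co-occurrence, but the geometry is the same): "snow" and "ice" end up close together in the embedding space, yet neither vector points at actual snow.

    // Made-up 3-dimensional "embeddings" keyed by token.
    const embeddings = {
      snow:   [0.90, 0.80, 0.10],
      ice:    [0.85, 0.90, 0.05],
      freeze: [0.80, 0.70, 0.20],
      guitar: [0.10, 0.05, 0.95],
    };

    // Cosine similarity: near 1 means "appears in very similar contexts",
    // near 0 means "probably unrelated".
    const dot = (a, b) => a.reduce((sum, x, i) => sum + x * b[i], 0);
    const norm = a => Math.sqrt(dot(a, a));
    const cosine = (a, b) => dot(a, b) / (norm(a) * norm(b));

    console.log(cosine(embeddings.snow, embeddings.ice));    // ~0.99
    console.log(cosine(embeddings.snow, embeddings.guitar)); // ~0.20

The similarity says nothing about what snow is; it only says the tokens keep the same company, which is exactly the missing-dictionary problem.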

All of this is to say that the act of learning is either observing the real world and figuring out how it works, or reading from someone who has done the observing and written down their interpretation, then going outside and confirming it. Barring that, we reconstruct the life of that person so that we can correct the imperfections of language. With LLMs, there's no way to correct anything, as the statements themselves are not truthful; they can only accidentally be right.


I think the core insight OP may be looking for is that your dictionary is just an illusion - that concepts being related to other concepts to various degrees is all that there is. The meaning of a concept is defined entirely by the other concepts that are close to it in something like the latent space of a language model.

Of course humans also get to connect concepts with inputs from other senses, such as sight, touch, smell or sound. This provides some grounding. That grounding is important for learning to communicate (and for having something to communicate about), and it was important for humans when first developing languages - but it's not strictly necessary for learning the meanings. All the empirical grounding is already implicitly encoded in human communication, so it should be possible for an LLM to actually understand what e.g. "green" means despite having never seen color. Case in point: blind people are able to do this, so the information is there.


Blind people are no more able to understand* (as qualia) "green" than a sighted human is able to understand* gamma rays. The confusion is between working with abstract concepts vs an actual experience. A picture of bread provides no physical nourishment beyond the fiber in the paper it is printed on.

In an abstract space (e.g. word vectors, poetry) green could have (many potential) meanings. But none of them are even in the same universe as the actual experience (qualia) of seeing something green. This would be a category mistake between qualia-space and concept-space.

* understand in the experiential, qualia sense.

https://en.wikipedia.org/wiki/Qualia

https://en.wikipedia.org/wiki/Category_mistake


I don't need the qualia of gamma rays to understand gamma rays, nor to be understood in turn when I say that "I understand gamma rays".

Conversely, I can (and do) have qualia that I do not understand.

The concept of qualia is, I think, pre-paradigmatic — we know of our own, but can't turn that experience into a testable phenomenon in the world outside our heads. We don't have any way to know whether any given AI does or doesn't have it, nor how that might change as the models go from text to multimodal, or if we give them (real or simulated) embodiment.


> that concepts being related to other concepts to various degrees is all that there is

This is the view that Fodor termed "inferential role semantics". https://ruccs.rutgers.edu/images/personal-ernest-lepore/WhyM...


> Put the pen between your index and middle fingers.

I'm genuinely curious, how else could you hold a pen?


Done the "standard" way (at least the way we were taught in school), the pen rests between your thumb and index finger. In both cases, you "pinch" the end of the pen with your thumb, index finger, and middle finger. But held the standard way, the length of the pen is largely unsupported (so support has to come from "pinching" harder), whereas holding the pen between the middle and index fingers provides passive support. From your question, did you always hold a pen between your middle and index fingers?

