> Suppose you study this giant stack of Thai text for years in isolation. After all this study, you're good enough that given a few written Thai words, you can write sequences of words that are likely to follow, given what you know of these patterns. You can fill in blanks. But should anyone guess that you "know" what you're saying? Nothing has ever indicated to you what any of these words _mean_. If you give back a sequence of words, which a Thai speaker understands to be expressing an opinion about monetary policy, because you read several similar sequences in the pile, is that even your opinion?
Note that this isn't just an exotic thought experiment. People like this already exist; the condition is known as "Wernicke's aphasia". People displaying this condition can speak normally. But they can't understand what is said to them; they are missing the normal mental mapping from words to meanings.
> People displaying this condition can speak normally.
Not really? They can speak in grammatically correct, connected sentences, but what they say can be nonsense. I wouldn't call that normal. I think LLMs show that, with access to text alone, it's possible to build a model good enough that its output is not only not nonsense, but so good that academic psychologists suggest it may have a theory of mind.
> However, often what they say doesn’t make a lot of sense or they pepper their sentences with non-existent or irrelevant words.