So what? 90% (or more) of humans aren't making any sort of breakthrough in any discipline, either. 99.9999999999% of human speech/writing isn't producing "breakthroughs" either; it's just a way to communicate.
>It's just ironic how human-like the flaws of the system are. (Hallucinations that are asserting untrue facts, just because they are plausible from a pattern matching POV)
The LLM is not "hallucinating". It's just operating as it was designed to do, which often produces results that do not make any sense. I have actually hallucinated, and some of those experiences were profoundly insightful, quite the opposite of what an LLM does when it "hallucinates".
You can call anything a "breakthrough" if you aren't aware of prior art. And LLMs are "trained" on nothing but prior art. If an LLM does make a "breakthrough", then it's because the "breakthrough" was already in the training data. I have no doubt many of these "breakthroughs" will be followed years later by someone finding the actual human-based research that the LLM consumed in its training data, rendering the "breakthrough" not quite as exciting.