What people in the media are calling 'hallucination' in large language models is really imagination and creativity. It's inherent in the models, and it's why they have been able to unlock so many amazing cognitive capabilities where previous efforts failed.
That's like saying a simple linear regression predicting the wrong y for a given x is just "being creative". The LLM lacks sufficient complexity to make the correct word prediction. It's not being creative; it's just flat-out wrong.
Sure, and your linear regression example actually shows how 'hallucination' in models can be good. Imagine if every prediction algorithm had to fit every point in noisy data exactly: you'd end up with either something overfitted, like an ugly high-order polynomial that passes through every point, or just a database of every (x, y) pair in the training set. Neither of those 'hallucinates' if you query a point from the training set, but for most purposes neither is as useful as a fitted linear regression, which 'hallucinates' a y value even when you give it an exact x from the training set. Of course, if all you want is to memorize and repeat back the training set, a database is the best tool for the job!
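To make that tradeoff concrete, here's a minimal sketch (synthetic noisy data; the specific numbers and model choices are illustrative, not from this thread). The lookup table and the interpolating polynomial reproduce the training set exactly, while the fitted line returns a slightly different y even for a training x:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(6, dtype=float)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=x.size)  # noisy "training set"

# Option 1: a database -- perfect recall, zero generalization.
database = dict(zip(x, y))

# Option 2: a degree-5 polynomial through 6 points -- interpolates every
# training point (fits the noise), so it never "hallucinates" on them.
poly = np.polyfit(x, y, deg=5)

# Option 3: a fitted line -- the useful model, but it "hallucinates":
# even for an x straight from the training set, its y differs from the
# stored value.
slope, intercept = np.polyfit(x, y, deg=1)

x_query = x[3]
print("stored y:      ", database[x_query])
print("overfit poly y:", np.polyval(poly, x_query))    # ~= stored y
print("linear fit y:  ", slope * x_query + intercept)  # smoothed, not exact
```

The line's "error" on the training point is exactly the property that makes it useful on points it has never seen.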
So by your definition of creativity, the LLM is always being creative, because it's always using probability to determine the next word (it's not just spitting out facts from a database). Hallucination is just the name we give that prediction, or 'creativity' as you call it, when it turns out to be wrong.
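That mechanism is easy to see in miniature. Here's a toy bigram model as a sketch (the corpus and all names are made up for illustration; a real LLM conditions on far more context, but the sampling step is the same in spirit): the model never "looks up a fact", it always draws the next word from a probability distribution, and some draws are plausible but wrong.

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram transitions: each word -> distribution over next words.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_word(prev, rng=random.Random(0)):
    # Sample the next word in proportion to its observed frequency.
    counts = transitions[prev]
    words = list(counts)
    weights = list(counts.values())
    return rng.choices(words, weights=weights)[0]

# "the" is followed by "cat" twice, "mat" once, "fish" once in training,
# so sampling can start "the fish sat ..." -- fluent, probable, and wrong.
print([next_word("the") for _ in range(5)])
```

Every output here is produced the same way; we only call it a hallucination when the sampled continuation happens to be false.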