Humans don't really generate text as a series of words. If you've ever known what you wanted to say but been unable to remember the word, you've seen this in practice. Although the analogy is probably a helpful one, LLMs are basically doing the word-remembering bit of language without any of the thought behind it.
How do you generate your text? Do you write the middle of the sentence first, come back to the start, then finish it? Or do you have a special keyboard that lets you drop in sentences as fully formed input?
As systems, humans and LLMs behave in observably similar ways. You feed in some sort of prompt+context, a little bit of thinking happens, a response is developed by some wildly black-box method, and then a series of words is generated as output. The major difference is that the black boxes presumably work differently, but since they are both black boxes, that doesn't matter much when asking which will do a better job at root cause analysis.
People seem to go a bit crazy on this topic over the idea that complex systems can be built from simple primitives. Just because the LLM's primitives are simple doesn't mean the overall model isn't capable of complex responses.
> Do you write the middle of the sentence first, come back to the start then finish it?
Am I the only one who does this?
I'll jot down the central point I want to make, then come back and fill in the text around it -- both before and after.
When writing long form, I'll block out whole sections and build up an outline before starting to fill it in. This approach allows a better distribution of "points of interest" (and is how I was taught to write in the '90s).