To me it looks like some of the more “interesting” posts are created by humans. It’s a pointless experiment; I don’t understand why anyone would find it interesting what statistical models randomly write in response to other random writings.
I think the level at which someone is impressed by AI chatbot conversation may be correlated with their real-world conversation experience/skills. If you don’t really talk to real people much (a sadly common occurrence), then an LLM can seem very impressive and deep.
I never considered this aspect at all. To me it feels more like some people find it really fascinating that we finally live in the future. I think so too, just with a lot of reservations, but fully aware that the genie has been let out of the bottle. Other people are like me. And the rest don’t want any part of this.
However, personal views aside, looking at it purely technically, it’s just mindless token soup. That’s why I find it weird that even deeply technical people like Andrej Karpathy (there was a post by him somewhere today) find it fascinating.
A human, not a statistical model. I can insert any random words of my own volition if I want to, not because I have been pre-programmed (pre-trained) to output tokens based on a limited (tiny) 200k context for one particular conversation, forgetting it all by the time a new session starts.
That’s why AI models, as they currently are, won’t ever be able to come up with anything even remotely novel.
Neural networks are an extremely loose and simplified approximation of how actual biological neural pathways work. They are simplified to the point that there’s basically nothing in common.
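For concreteness, here is a minimal sketch (illustrative names and values, not from the thread) of the standard artificial "neuron": the whole model of a biological cell is collapsed into a weighted sum pushed through a fixed nonlinearity.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Toy example of the artificial-neuron abstraction (assumed names)."""
    # A weighted sum of the inputs plus a bias term...
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed by a sigmoid activation. No spike timing, no
    # neurotransmitters, no dendritic computation: just this one formula.
    return 1.0 / (1.0 + math.exp(-z))

# z = 1.0*0.4 + 0.5*(-0.2) + 0.1 = 0.4, sigmoid(0.4) ≈ 0.5987
out = artificial_neuron([1.0, 0.5], [0.4, -0.2], 0.1)
```

Everything a real neuron does beyond that weighted sum is simply absent from the abstraction, which is the point being made above.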