Conversations around AI are almost always unclear and poorly framed, driven by a profit-hungry hype train.
We're still talking about AI as though the goal was never actually to have artificial intelligence. LLMs are impressive for what they are, but they definitely aren't intelligent. OpenTextPrediction just doesn't have a nice ring to it and definitely wouldn't be valued at billions of dollars.
Whenever I see somebody type super intelligence or AGI, I remember that AI was supposed to be solved by a summer project at Dartmouth in 1956 and I chuckle a bit.
Well, that's a great question, and it points to an even more basic complaint I have with the AI research space. They have yet to come up with a clear way to define or recognize intelligence or consciousness.
Those are interesting starting points. I don't know if I'd say it's that simple, but the direction seems totally reasonable.
The ability to solve problems is a particularly interesting one. To me there's a difference between brute forcing or pattern recognition and truly solving a problem (I don't have a great definition for that!). If that's the case, how do we really recognize which one an LLM or potential AI is doing?
It'd be a huge help if AI researchers put more focus on the interpretability problem before developing systems from which intelligence could plausibly emerge.
Are we still talking about AIs as anything other than tools that will enhance everyone’s abilities rather than replace the people at the bottom?
Beat elite programmers? Have you seen what hundreds of billions of dollars gets you? The models can’t even solve a logic puzzle.
Say those AIs exist. Who is going to prompt them? Steve, the CEO, who can’t open his PDFs, or Bill, the CTO, who has only 24 hours in the day?