Am I the only one who feels that Claude Code is what they would have imagined basic AGI to be like 10 years ago?
It can plan and take actions towards arbitrary goals in a wide variety of mostly text-based domains. It can maintain basic "memory" in text files. It's not smart enough to work on a long time horizon yet, it's not embodied, and it has big gaps in understanding.
But this is basically what I would have expected v1 to look like.
> Am I the only one who feels that Claude Code is what they would have imagined basic AGI to be like 10 years ago?
That wouldn't have occurred to me, to be honest. To me, AGI is Data from Star Trek. Or at the very least, Arnold Schwarzenegger's character from The Terminator.
I'm not sure that I'd make sentience a hard requirement for AGI, but I think my general mental fantasy of AGI even includes sentience.
Claude Code is amazing, but I would never mistake it for AGI.
I would categorize sentient AGI as artificial consciousness[1], but I don't see an obvious reason AGI inherently must be conscious or sentient. (In terms of near-term economic value, non-sentient AGI seems like a more useful invention.)
For me, AGI is an AI that I could assign an arbitrarily complex project, and given sufficient compute and permissions, it would succeed at the task as reliably as a competent C-suite human executive. For example, it could accept and execute on instructions to acquire real estate that matches certain requirements, request approvals from the purchasing and legal departments as required, handle government communication and filings as required, construct a widget factory on the property using a fleet of robots, and operate the factory on an ongoing basis while ensuring reliable widget deliveries to distribution partners. Current agentic coding certainly feels like magic, but it's still not that.
"Consciousness" and "sentience" are terms mired in philosophical bullshit. We do not have an operational definition of either.
We have no agreement on what either term really means, and we definitely don't have a test that could be administered to conclusively confirm or rule out "consciousness" or "sentience" in something inhuman. We don't even know for sure if all humans are conscious.
What we really have are task-specific performance metrics. This generation of AIs is already in the valley between "average human" and "human expert" on many tasks. And the performance of frontier systems keeps improving.
"Consciousness" seems pretty obvious. The ability to experience qualia. I do it, you do it, my dog does it. I suspect all mammals do it, and I suspect birds do too. There is no evidence any computer program does anything like it.
The definition of "featherless biped" might have more practical merit, because you can at least check for feathers and count limbs touching the ground in a mostly reliable fashion.
We have no way to "check for qualia" at all. For all we know, an ECU in a year 2002 Toyota Hilux has it, but 10% of all humans don't.
Totally agree. It even (usually) gets subtle meanings from my often hastily written prompts to fix something.
What really strikes me is that there is still so much that can be done to leverage LLMs with tooling. Just small things in Claude Code (plan mode, for example) make the system work so much better than, say, the update from Sonnet 3.5 to 4.0 did, in my eyes.
I suspect most people envision AGI as at least having sentience. To borrow from Star Trek, the Enterprise's main computer is not at the level of AGI, but Data is.
The biggest thing that is missing (IMHO) is a discrete identity and notion of self. It'll readily assume a role given in a prompt, but lacks any permanence.
The analogy I like to use is from the fictional universe of Mass Effect, which distinguished between VI (Virtual Intelligence), which is a conversational interface over some database or information service (often with a holographic avatar of a human, asari, or other sentient being); and AI, which is sentient and smart enough to be considered a person in its own right. We've just barely begun to construct VIs, and they're not particularly good or reliable ones.
One thing I like about the Mass Effect universe is the depiction of the geth, which qualify as AI. Each geth unit is not run by a singular intelligent program, but rather a collection of thousands of daemons, each of which makes some small component of the robot's decisions on its own, but together they add up to a collective consciousness. When you look at how actual modern robotics platforms (such as ROS) are designed, with many processes responsible for sensors and actuators communicating across a common bus, you can see the geth as sort of an extrapolation of that idea.
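To make the ROS comparison concrete, here is a minimal sketch of that publisher/subscriber pattern in ROS 2 using rclpy. The node and topic names (wheel_speed_sensor, motor_controller, wheel_speed) are invented for illustration, not taken from any real robot stack.

    # Minimal ROS 2 sketch: two small "nodes" talking over a shared message bus.
    # Names and values here are made up for illustration.
    import rclpy
    from rclpy.node import Node
    from rclpy.executors import SingleThreadedExecutor
    from std_msgs.msg import Float32

    class WheelSpeedSensor(Node):
        """Publishes a (fake) wheel-speed reading ten times per second."""
        def __init__(self):
            super().__init__('wheel_speed_sensor')
            self.pub = self.create_publisher(Float32, 'wheel_speed', 10)
            self.create_timer(0.1, self.tick)

        def tick(self):
            msg = Float32()
            msg.data = 1.23  # placeholder sensor value
            self.pub.publish(msg)

    class MotorController(Node):
        """Subscribes to the same topic and reacts to each reading."""
        def __init__(self):
            super().__init__('motor_controller')
            self.create_subscription(Float32, 'wheel_speed', self.on_speed, 10)

        def on_speed(self, msg):
            self.get_logger().info(f'adjusting motors for speed {msg.data:.2f}')

    def main():
        rclpy.init()
        # On a real robot each node typically runs as its own process;
        # a single executor is used here just to keep the sketch runnable.
        executor = SingleThreadedExecutor()
        executor.add_node(WheelSpeedSensor())
        executor.add_node(MotorController())
        executor.spin()

    if __name__ == '__main__':
        main()

Each of these small processes only knows about its own sensor or actuator and the topics it cares about; the "mind" of the robot is whatever emerges from all of them running at once, which is why the geth feel like an extrapolation of the design rather than pure fantasy.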
No, you are not the only one. I am continuously mystified by the discussion surrounding this. Claude is absolutely and unquestionably an artificial general intelligence. But what people mean by "AGI" is a constantly shifting, never-defined goalpost moving at sonic speed.