
This makes sense if we're trying to recreate a human mind artificially, but I don't think that's the goal?

There's no reason an equivalent or superior general intelligence needs to be similar to us at all



There's no substance to the idea of "superior intelligence". Nobody can say what it means, except by assuming that animal intelligence is in the same category as the kind we want and differs from human intelligence in degree rather than qualitatively, and then extrapolating forward along an intelligence meter that we don't actually have.

Besides which, we already defined "artificial intelligence" to mean non-intelligence: are we now going to attain "artificial general intelligence" by the same process? Should we add another letter to the acronym, like moving on to "genuine artificial general intelligence"?


Is there really no agreement on what intelligence refers to? I've seen it defined as the ability to reach a goal, which seems clear to me. E.g. a chess AI rated 1500 Elo is more intelligent than one rated 1000.
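
For calibration, here's what that Elo gap means under the standard Elo expected-score formula; a rough Python sketch, with no claim about any particular engine:

    def expected_score(rating_a, rating_b):
        # Standard Elo formula: expected score of player A against player B.
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

    # A 1500-rated engine against a 1000-rated one is expected to score ~0.947,
    # i.e. take roughly 95% of the available points.
    print(expected_score(1500, 1000))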


That's capability; intelligence can also be about how quickly it learned to reach that capability.

Consider the difference in intelligence between a kid who skipped five years of school vs one who was held back a year: if both got the same grade in the end, the one who skipped five years was smarter.


Makes sense. Maybe a combination of both would be most accurate: how fast you can learn plus what your peak capability is.

Judging solely by rate of learning would make LLMs way smarter than humans already, which doesn't seem right to say.


> Judging solely by rate of learning would make LLMs way smarter than humans already, which doesn't seem right to say

Sure, but "rate" also has two meanings, both useful, but importantly different: per unit of wall-clock time, and per example.

Transistors are just so much faster than synapses that computers can (somewhat) compensate for being absolutely terrible by the latter measure, at least in cases where there are enough examples for them to learn from.

In cases where the supply of examples is too small (and can't be supplemented with synthetic data, simulations and so on), state-of-the-art AI models still suck. Where there is sufficient data, for example self-play in chess and Go, the AI can be superhuman by a substantial margin.
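
To make that distinction concrete, here's a toy sketch; every number in it is invented purely for illustration, not a measurement of anything:

    # Two senses of "learning rate": improvement per example seen
    # vs. improvement per second of wall-clock time.
    examples_seen = {"human": 1_000, "model": 1_000_000_000}
    seconds_spent = {"human": 3.6e5, "model": 3.6e5}   # same wall-clock budget
    skill_gained  = {"human": 50.0,  "model": 60.0}    # arbitrary skill units

    for learner in ("human", "model"):
        per_example = skill_gained[learner] / examples_seen[learner]
        per_second  = skill_gained[learner] / seconds_spent[learner]
        print(learner, per_example, per_second)

    # The model comes out slightly ahead per second, yet far behind per example.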


LLMs are trained on human data and aimed at performing tasks in human roles. That's the goal.

It is supposed to be super, but specifically superhuman: able to interact with us.

Which leads us to the Turing Test (also not really a test... "the imitation game" is more of a philosophical exploration of thinking machines).

My comment assumes this is already understood as Turing explained.

If the thing is not human, then there's absolutely no way we can evaluate it. There's no way we can measure it. It becomes an impossible task.


What's wrong with measuring and evaluating its outputs directly? If it can file taxes more accurately than us, does it matter whether it does so in a human manner?

Birds and planes both fly and all
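
Concretely, output-only evaluation is straightforward to set up. A minimal sketch; the task and the system under test here are hypothetical stand-ins:

    # Score a system purely on whether its outputs match a reference,
    # ignoring how human-like the process behind them is.
    def accuracy(system, cases):
        cases = list(cases)
        correct = sum(1 for prompt, expected in cases if system(prompt) == expected)
        return correct / len(cases)

    # Trivial stand-in "system" and toy cases, purely for illustration:
    cases = [("2+2", "4"), ("3*3", "9")]
    print(accuracy(lambda p: str(eval(p)), cases))  # 1.0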


If your definition of AGI is filing taxes, then it's fine.

Once we step into any other problem, then you need to measure that other problem as well. Lots of problems are concerned with how an intelligent being could fail. Our society is built on lots of those assumptions.


For _investment_ purposes the definition of AGI is very simple. It is: "to what extent can it replace human workers?".

From this perspective, "100% AGI" is achieved when AI can do any job that happens primarily on a computer. This can be extended to humanoid robots in the obvious way.


That's not what AGI used to mean a year or two ago. That's a corruption of the term, and using that definition of AGI is the mark of a con artist, in my experience.


I believe the classical definition is, "It can do any thinking task a human could do", but tasks with economic value (i.e. jobs) are the subset of that which would justify trillions of dollars of investment.


I don't see how that changes anything.

Failing like a human would is not a cute add-on. It's a fundamental requirement for creating AIs that can replace humans.


Industrial machines don't fail like humans yet they replaced human workers. Cars don't fail like horses yet they replaced them. ATMs don't fail like bank tellers... Why is this such a big requirement?


Microwaves didn't replace ovens. The Segway didn't replace bikes. 3D movies didn't replace IMAX. I can go on and on...

Some things fail, or fail to meet their initial overblown expectations.

The microwave oven was indeed a commercial success. And that's fine, but it sucks at being an oven. Everyone knows it.

Now, this post is more about the scientific part of it, not the commercial one.

What makes an oven better than a microwave oven? Why is pizza from an oven delicious and microwave pizza sucks?

Maybe there's a reason, some Maillard reaction that requires hot air convection and can't be replicated by shaking up water molecules.

We are talking about those kinds of things. What makes it tick, how does it work, etc. Not if it makes money or not.

Damn, the thing doesn't even make money yet. Why talk about a plus that the technology still doesn't have?


The thread we're in was arguing that the requirement for AGI is to fail the exact same way humans do. I pointed out, with these examples, that failing the exact same way is not a requirement for a new technology to replace people or other technology. You're reading too much into what I said and putting words in my mouth.

What makes it tick is probably a more interesting question to me than to the AI skeptics. But they can't stop declaring some special quality (consciousness, awareness, qualia, reasoning, intelligence) that AI by their definition can never have, and insisting that this quality is immeasurable, unquantifiable, undefinable... That is literally a thought-stopping semantic dead end that I feel the need to argue against.

Finally, it doesn't make money the same way Amazon or Uber didn't make money for a looong time: by bringing in lots of revenue, reinvesting it, and not caring about profit margins while in the growth stage. Will we seriously go through this for every startup? The industry is already at $10-20B a year at least, and that will keep growing.


AGI does not currently exist. We're trying to think what we want from it. Like a perfect microwave oven. If a company says they're going to make a perfect microwave oven, I want the crusty dough and delicious gratin cheese effect on my cooked focaccia-inspired meals.

What exists is LLMs, transformers, etc. Those are the microwave oven, the one that results in rubbery cheese and cardboard dough.

It seems that you are willing to cut some slack to the terrible microwave pizza. I am not.

You complained about immeasurable qualities, like qualia. However, I gave you a very simple measurable quality: failing like a decent human would instead of producing gibberish hallucinations. I also explained in other comments on this thread why that measurable quality is important (it plays with existing expectations, just like existing expectations about a good pizza).

While I do care about those more intangible characteristics (consciousness, reasoning, etc.), I decided to concede and exclude them from this conversation from the get-go. It was you who brought them back in, from who-knows-where.

Anyway. It seems that I've addressed your points fairly. You had to reach for other skeptic-related narratives in order to keep the conversation going, and by that point, you missed what I was trying to say.



