
What baffles me is the number of humans who think they are in personal possession of some super special, sacred form of magical and unexplainable intelligence. "AI is just stats" — yes, indeed, but so is human intelligence. In many ways, AI from 2010 was already better than human intelligence.

Three remarks:

- The task many people seem to be benchmarking against is not just a measure of general intelligence, but a measure of how well AI is able to emulate human intelligence. That's not wrong, but I do find it amusing. Emulating any system within another generally requires an order of magnitude higher performance.

- The degree to which human intelligence fails catastrophically in each of our lives, on a continuous basis, is way too quickly forgotten. We have a very selective memory indeed. We have absolutely terrible judgment, are super irrational, and pretty reliably make decisions that are against our own interests, whether it's with regard to tobacco use, avoidance of physical exercise, or refusal of life-saving medications or prophylactics. We avoid spending time learning maths and science because it's not cool, and we openly display pride in our anti-intellectual behaviours and attitudes. We're all incredibly stupid by default.

- AI researchers need to work more closely with neuroanatomists. The main thing preventing AI from behaving like a human is the different macro structure of human NNs vs artificial NNs. Our brains aren't random assortments of randomly connected neurons: there's structure in there that explains our patterns of behaviour, and that is lacking in even the most modern AI. We can't expect AI to be human if we don't give it human structures.
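A toy sketch of that idea, with the caveat that the region names and connectivity here are purely illustrative assumptions, not real neuroanatomy: the point is that the macro wiring is fixed by the architecture rather than left as one undifferentiated, fully connected blob.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_region(n_in, n_out):
    # One "region": a random weight matrix (untrained, for illustration).
    return rng.standard_normal((n_in, n_out)) * 0.1

# Two hypothetical regions with a fixed feed-forward macro structure:
# "sensory" feeds only into "motor", nothing is all-to-all.
sensory = make_region(8, 16)   # input features -> internal code
motor = make_region(16, 4)     # internal code -> action outputs

def forward(x):
    h = np.tanh(x @ sensory)   # region 1 processes raw input
    return np.tanh(h @ motor)  # region 2 only ever sees region 1's output

y = forward(rng.standard_normal(8))
print(y.shape)  # (4,)
```

The structural constraint is the whole point of the sketch: which region connects to which is decided up front, the way gross brain anatomy is, while only the weights inside each region would be learned.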



"We have a very selective memory indeed. We have absolutely terrible judgment, are super irrational, and pretty reliably make decisions that are against our own interests, "

This is a really bad argument - human intelligence is not highly rational, but it is deeply nuanced, using social cues, emotions, instincts, and a myriad of other things.

Computers can never be anti-knowledge because they lack the free will and social behavior of humans - but then, they didn't choose to be pro-knowledge either.


They most certainly could learn to be that. But we don't want that in a machine.

The human body also functions like a machine: there is no magic, just new stuff built upon very old stuff.


It’s not a good argument because it’s not an argument. It’s just intended to be a perspective point.

These things aren’t magical properties of a “higher” intelligence, they’re phenomena that emerge from structure. Give a robot a hindbrain and it will pick up on that type of thing.


General intelligence requires solving real problems in the real world. It isn't about emulating humans, but about emulating anything resembling an intelligent being we are aware of. It would be totally exceptional if we could properly emulate the intelligence of a fly or an ant, but we can't even do that. "Emulate a human brain" you say, but we can't even emulate brains a million times smaller than that.


https://en.m.wikipedia.org/wiki/AnimatLab

We totally do emulate organisms on that scale. The real challenge is simulating the sensory inputs and the feedback loop: outputs, the environment changing as the body acts, then new inputs.

Disembodied simulations of neural networks don't work. Nervous systems are part of a body, an environment, and all the feedback loops that come with them.
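A minimal sketch of that sense-act feedback loop, where the corridor environment and the trivial policy are made-up stand-ins (not any real framework or organism model): the agent only ever sees local sensory input, and its outputs change the world that produces its next input.

```python
# Minimal sense-act-feedback loop: the "body" is just a position
# in a 1-D corridor, and sensing is strictly local.
class Corridor:
    def __init__(self, length=10):
        self.length = length
        self.pos = 0

    def sense(self):
        # Local sensory input: remaining distance to the far end.
        return self.length - self.pos

    def act(self, step):
        # Acting changes the body/environment, which changes the
        # next sensory input -- that closing of the loop is the point.
        self.pos = max(0, min(self.length, self.pos + step))

def policy(obs):
    # Trivial agent: step forward while it still senses distance.
    return 1 if obs > 0 else 0

env = Corridor()
for _ in range(20):
    env.act(policy(env.sense()))

print(env.pos)  # 10
```

Everything interesting in real embodiment lives in how rich `sense` and `act` are; here they are one integer each, which is exactly why the loop is easy to close in a toy and hard to close for anything resembling an organism.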

It sounds like you really just want to see an ML algorithm have a body to learn in. Why we ever expect AGI to happen without letting an ML algorithm learn by interacting with a "real" reality seems strange to me. By all means, keep making glorified optic nerves and expecting them to "wake up".


You don't need sensors, you just need a virtual room.

> We totally do emulate organisms on that scale. The

There is no evidence those emulations actually emulate those organisms. They just built a neural net with the same structure and assumed the cells don't matter. But cells are really smart and can navigate environments on their own; they are intelligent beings in their own right, and building a flea out of a thousand of those is far more plausible than doing it with a neural net of similar size.

And yes, in order to prove that we actually emulated those organisms, you need to show that the emulation does the same things in the same scenarios. You don't even need to reproduce everything; something simple like being able to move around, gather material, and build a home in a physics engine would be huge.


> You don’t need sensors, you just need a virtual room.

While technically true, I actually think this is way more difficult than it sounds, bordering on practical impossibility.

I think the other commenter was making a really important point. The simulated environment would need to be incredibly rich, to a point as to almost defy imagination.

Consider what happens to a human mind when confined in a box (prison) with limited opportunities for stimulation. There’s a room, a gym, other people with whom to socialize, food, walls, an outdoors enclosure... And yet someone who spends their entire life in this type of environment will certainly face serious neurodevelopmental issues.

For human/mammal order of AI, I would even argue that simulating adequate inputs might actually be a more difficult problem than building the AI that responds to them!


Have you seen these C. elegans emulations in a robot body?

https://www.youtube.com/watch?v=YWQnzylhgHc

https://www.youtube.com/watch?v=xu_oYLmPX9U


Also note: organisms not only navigate the environment, they interact with it, capture and consume energy from it, and reproduce. Autonomously.



