
I totally see your point, and my purpose is definitely not to be alarmist and claim that Skynet is about to come out of AlphaGo or some equivalent neural net. But I think the opposite attitude is also mistaken.

As others have pointed out, we don't really know how the brain works. Neural nets represent one of our best attempts to model brains. Whether or not it's good enough to create real intelligence is completely unknown. Maybe it is, maybe it's not.

Intelligence appears to be an emergent property and we don't know the circumstances under which it emerges. It could come out of a neural network. Or maybe it could not. The only way we'll find out is by trying to make it happen.

Taking a position that neural networks cannot ever result in strong AI is as blind as taking a position that they must.

This is Hacker News, not a mass newspaper, so I think we can take the more nuanced and complex view here.



>> Neural nets represent one of our best attempts to model brains.

See, now that's one of the misconceptions. ANNs are not modelled on the brain, not any more, and not really since the poor single-layer Perceptron, which itself was modelled after an early model of neuronal activation. What ANNs really are is algorithms for optimising systems of functions. And that includes things like Support Vector Machines and Radial Basis Function networks, which don't even fit the usual multi-layer network diagram particularly well.
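To make the "it's function optimisation" point concrete, here's a minimal sketch (my own illustration, not anyone's library): fitting y = w*x + b by gradient descent on a mean-squared-error loss. Strip away the network diagram and this loop is all that "training" is; the data and parameter names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
# synthetic data with ground truth w=3.0, b=0.5, plus a little noise
y = 3.0 * x + 0.5 + rng.normal(0, 0.05, size=100)

w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    err = (w * x + b) - y
    # gradients of mean squared error with respect to w and b
    grad_w = 2 * np.mean(err * x)
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # close to 3.0 and 0.5
```

Nothing brain-like survives in that loop; the same descent step drives everything from linear regression to deep nets, just with a bigger system of functions in the middle.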

It's unfortunate that this sort of language and imagery is still used abundantly, by people who should know better no less, but I guess "it's an artificial brain" sounds more magical than "it's function optimisation". You shouldn't let it mislead you though.

>> Taking a position that neural networks cannot ever result in strong AI is as blind as taking a position that they must.

I don't agree. It's a subject that's informed by a solid understanding of the fundamental concepts: function optimisation, again. There's uncertainty because there are theoretical limits that are hard to test, for example the fact that a multi-layer perceptron with a single hidden layer can approximate any continuous function given a sufficient number of hidden units, or, on the opposite side, Gold's result that language classes containing all finite languages plus an infinite one are _not_ learnable in the limit from positive examples (not ANN-specific, but limiting what any algorithm can learn), etc. But the arguments on either side are, well, arguments. Nobody is being "blind". People defend their ideas, is all.
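The universal approximation point can be demonstrated empirically in a few lines. This is a toy sketch of my own (sizes, learning rate, and target function all arbitrary): a single-hidden-layer tanh network trained from scratch with full-batch gradient descent to fit sin(pi*x). It only shows the fit improving; the theorem itself is about what's representable, not about whether gradient descent finds it.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(np.pi * x)  # target function to approximate

H = 16  # hidden units; the theorem says "enough of these" suffices
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(x)
initial_mse = np.mean((pred0 - y) ** 2)

lr = 0.05
for _ in range(5000):
    h, pred = forward(x)
    d_pred = 2 * (pred - y) / len(x)      # gradient of MSE w.r.t. output
    gW2 = h.T @ d_pred; gb2 = d_pred.sum(0)
    d_h = d_pred @ W2.T * (1 - h ** 2)    # backprop through tanh
    gW1 = x.T @ d_h; gb1 = d_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
final_mse = np.mean((pred - y) ** 2)
print("improved:", final_mse < initial_mse)  # loss drops well below the random init
```

Again, this is optimisation all the way down: the "network" is just a parametrised family of functions, and learning is descent on a loss surface.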


Convolutional neural nets are the most accurate model of the ventral stream, numerically speaking. See work by Yamins, DiCarlo etc.


We don't really know how AI works either. NNs (for example) do stuff, and sometimes it's hard to see why.

>Taking a position that neural networks cannot ever result in strong AI is as blind as taking a position that they must.

Not really. Right now it's taking the position that there is no practical path that anyone can imagine from a go-bot, which works in a very restricted problem space, to a magical self-improving AI-squared god-bot, which would work in a problem space whose shape, boundaries, and inner properties are completely unknown.

Meta-AI isn't even a thing yet. There are some obvious things that could be tried - like trying to evolve a god-bot out of a gigantic pre-Cambrian soup of micro-bots where each bot is a variation on one of the many possible AI implementations - but at the moment basic AI is too resource-intensive to make those kinds of experiments possible.

And there's no guarantee anything we can think of today will work.



