
This is a really interesting paper.

The discussion about the importance of decoders strikes me as a parallel to human eyes, ears, and other sensory organs. We actually don't have a good grasp of what our eyes see; they just see (produce data and relay it), and children figure out what is what.

I guess AGI will be achieved when we can sit a program in a simulated world with completely fabricated input and get a generally intelligent program out. Maybe we're in that simulation right now.



I would say your last paragraph is a complete misreading of the paper. One of the central points that it opens with is that "AGI" requires situated, or embodied, intelligence. It needs to be able to operate within, and upon, a physical world.


Not to be overly snarky, but I'd argue that your comment here is a failure of creativity.

If the AGI mechanism can learn from the real world, it can learn from a simulated one (which it can similarly operate within and act upon) -- and in fact that can cut the training time down from the years or decades it takes humans by many orders of magnitude.

We already see things like this in robotics environments; it's a matter of fidelity/simulation quality. Even without perfect fidelity, if the mechanism of learning is correct, you'd get an intelligence with incomplete knowledge, not a completely different kind of thing.
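To make the sim-training point concrete, here's a minimal toy sketch of domain randomization, the trick robotics sim-to-real work leans on. Everything in it (the ToyCartEnv environment, the proportional-gain "policy", the random-search training) is made up for illustration, not taken from the paper: the simulator's physics parameters are resampled every episode, so the policy that survives is one that works across a range of worlds rather than in one exact simulation.

  import random

  # Hypothetical toy simulator: a 1-D cart that must reach the target x = 1.0.
  # Physics parameters (mass, friction) are randomized per episode so a policy
  # trained here cannot overfit to one exact simulation.
  class ToyCartEnv:
      def reset(self):
          # Domain randomization: sample a fresh "world" each episode.
          self.mass = random.uniform(0.5, 2.0)
          self.friction = random.uniform(0.05, 0.3)
          self.pos, self.vel = 0.0, 0.0
          return self.pos

      def step(self, force):
          accel = (force - self.friction * self.vel) / self.mass
          self.vel += accel * 0.05
          self.pos += self.vel * 0.05
          reward = -abs(1.0 - self.pos)          # reward peaks at the target
          done = abs(1.0 - self.pos) < 0.01
          return self.pos, reward, done

  # Trivial "policy": a proportional controller whose single gain we tune by
  # random search across many randomized episodes.
  def run_episode(env, gain, steps=200):
      pos, total = env.reset(), 0.0
      for _ in range(steps):
          pos, reward, done = env.step(gain * (1.0 - pos))
          total += reward
          if done:
              break
      return total

  env = ToyCartEnv()
  best_gain, best_score = None, float("-inf")
  for _ in range(50):
      gain = random.uniform(0.1, 5.0)
      # Score each candidate over 10 differently-randomized worlds.
      score = sum(run_episode(env, gain) for _ in range(10))
      if score > best_score:
          best_gain, best_score = gain, score

  print(f"best gain found: {best_gain:.2f}")

The gain that wins is the one that copes with every sampled mass/friction combination, which is the same argument the comment makes at a larger scale: imperfect simulation yields incomplete, not alien, competence.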


Have you ever worked with computers controlling physical world mechanisms? Or ever done any electrical wiring, or plumbing, in an existing house?


My personal opinion is that the intelligence ceiling scales with the complexity of the environment it operates in. This means we could theoretically get something resembling AGI in a sufficiently complex simulated environment. In practice, simulating an environment that complex would be extremely inefficient compared to just using the (computationally free) real physical world.

Also, the intelligence itself is shaped by the environment it operates in, so it would turn out more human-like the closer its operating environment is to our physical world. This also means intelligences not trained in our physical world (or a convincingly close simulation of it) won't be human-like, but rather very alien. Moreover, I'm not sure that even an intelligence trained in the physical world with human-like sensory inputs will necessarily turn out human-like. There might be a case for convergent evolution (i.e., mammalian intelligence being something of a global optimum), but I think human intelligence will only have a chance to emerge if everything, from the operating environment to the machine's body and neural structure, resembles a human to the point where there is no difference between the human and the machine at all.



