
This article presents an emerging architectural hypothesis of the brain as a biological implementation of a Universal Learning Machine.

I looked in the section titled "Universal Learning Machine", I looked at the footnotes (easy, there are none), and I googled and used Google Scholar. I found no coherent definition of "Universal Learning Machine".

I mean, the section I mentioned says: "An initial untrained seed ULM can be defined by 1.) a prior over the space of models (or equivalently, programs), 2.) an initial utility function, and 3.) the universal learning machinery/algorithm. The machine is a real-time system that processes an input sensory/observation stream and produces an output motor/action stream to control the external world using a learned internal program that is the result of continuous self-optimization." But it's using other vaguely defined concepts in a fairly vague fashion.
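To be fair, here is my best attempt at a concrete reading of that definition. Every name below is mine, not the author's, and it's only an interface, which is part of the problem: the "universal learning machinery" in point 3 is exactly the part left unspecified.

    # A sketch of the quoted definition as an interface; names are mine.
    from abc import ABC, abstractmethod

    class UniversalLearningMachine(ABC):
        def __init__(self, model_prior, utility_fn):
            self.model_prior = model_prior   # 1) prior over models/programs
            self.utility_fn = utility_fn     # 2) initial utility function
            self.internal_program = None     # the continuously re-learned controller

        @abstractmethod
        def step(self, observation):
            """3) one tick of the real-time loop: update the internal program
            from the observation stream and return an action."""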

What the author is defining is kind of like a Gödel Machine [1] or Symbolic Regression [2], to give two more concrete references than I've found in the text (well, I'm only skimming).

The key defining characteristic of a ULM is that it uses its universal learning algorithm for continuous recursive self-improvement with regards to the utility function (reward system).

And there the author gets much more specific, and the claim is much more debatable. Of course, if you leave "continuous" vague, then you have something vague again. If you're loose enough, then the brain, by your loose definition, has a utility function. But that can easily be true yet not useful. Every macroscopic physical system, at least, can in principle be predicted by solving its Lagrangian, but the existence of many, many intractable macroscopic physical systems just implies many, many unsolvable or unknown or unknowable Lagrangians.
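(For concreteness, "solving its Lagrangian" means, in principle, integrating the Euler-Lagrange equations

    L(q, \dot{q}, t) = T - V, \qquad
    \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0

which, for almost any interesting macroscopic system, have no tractable solution.)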

I think the problem with outlines like this, which I think are somewhat typical of broad-thinking amateurs, is not that it's an a priori bad place to start looking at intelligence. It might be useful. But without a lot of concrete research, you wind up with seemingly simple steps like "we just maximize function R" when any known method for such maximization would take longer than the age of the universe (the problem with a Gödel Machine). Which, again, isn't necessarily terrible: maybe you have an idea of how to approximately maximize the function much more simply and in much less time. But you should know what you're up against.
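To make the scale concrete, here is a toy version (mine, not the article's) of "just maximize R" by brute force over binary programs of length n, next to a cheap local-search approximation. The brute-force loop has 2^n candidates, so around n = 300 it already exceeds the number of atoms in the observable universe.

    # Toy illustration, not anything from the article: maximizing a reward
    # function R over all binary "programs" of length n.
    from itertools import product
    import random

    def R(program):
        # stand-in reward: number of 1-bits (any black-box score would do)
        return sum(program)

    def brute_force_max(n):
        # exact but hopeless: enumerates all 2**n candidate programs
        return max(product([0, 1], repeat=n), key=R)

    def hill_climb_max(n, iters=1000):
        # cheap approximation: single-bit-flip local search, O(iters * n)
        prog = [random.randint(0, 1) for _ in range(n)]
        for _ in range(iters):
            cand = prog[:]
            cand[random.randrange(n)] ^= 1
            if R(cand) >= R(prog):
                prog = cand
        return prog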

I present a rough but complete architectural view of how the brain works under the universal learning hypothesis.

Keep in mind that to claim a rough outline of how the brain operates is to claim more than the illustrious neuroscientists of today would claim.

[1] https://en.wikipedia.org/wiki/G%C3%B6del_machine [2] https://en.wikipedia.org/wiki/Symbolic_regression


