
This is similar to the emerging "Bayesian brain" theory, which views the brain as a system that tries to minimise prediction error (which may be the same thing as "free energy" in some related publications) by comparing its expectations with the actual information coming in from the senses.

https://towardsdatascience.com/the-bayesian-brain-hypothesis...

So far it seems to explain quite a lot of data, including many mental illnesses (e.g. many disorders can be thought of as the brain under-correcting or over-correcting for the prediction error).

When under-correcting, the brain is not learning enough from its mistakes, which may lead to delusions of superiority (e.g. being stuck in habitual behaviour, or an inability to change one's world-view based on new information). On the other hand, when over-correcting, the world may seem unpredictable and frightening, leading to self-doubt, anxiety and negative thoughts.
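The under/over-correcting idea can be sketched as a single learning-rate knob on a prediction-error update. This is my own toy illustration, not a model from the article; the gain `k` and the values are made up.

```python
# Toy sketch of a precision-weighted prediction-error update.
# The belief mu is nudged toward each observation by a gain k:
#   k near 0  -> under-correcting (belief barely moves on mistakes)
#   k near 1  -> over-correcting (belief whipsawed by every observation)
def update_belief(mu, observation, k):
    error = observation - mu   # prediction error
    return mu + k * error      # corrected belief

mu = 0.0
for obs in [1.0, 1.0, 1.0, 1.0]:   # the world keeps saying "1.0"
    mu = update_belief(mu, obs, k=0.5)
# With k=0.5 the belief converges geometrically toward 1.0: mu = 0.9375 here
```

With `k=0.05` the belief would still be near zero after four observations; with `k=1.5` it would overshoot and oscillate.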

Being wrong around 15% of the time might actually be the optimal rate for learning... https://www.independent.co.uk/news/science/failing-study-suc...



There is a fascinating Bayesian explanation for schizophrenia: the story goes that schizophrenic people have a much sharper prior/posterior than non-schizophrenic people, which makes it more difficult for them to correct their internal models when the environment diverges from their predictions, causing them to drift off into their own realities.

For example, if you run the rubber hand experiment with non-schizophrenic people, even if you don't stroke their real hand and the rubber hand at exactly the same time (say the timing offset is Gaussian with standard deviation sigma), with enough repeated exposures to the stimuli they will come to recognize the rubber hand as their own. In contrast, if you repeat the same experiment with schizophrenic people, it takes a smaller standard deviation or substantially more trials before they recognize the rubber hand as their own.
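The "sharper prior resists updating" point falls out of the standard conjugate Gaussian update. A hedged sketch, with the numbers and the 0/1 coding of "the rubber hand is mine" purely illustrative:

```python
# Conjugate Gaussian belief update: the posterior mean is a
# precision-weighted average of the prior mean and the observation.
def posterior_mean(prior_mu, prior_var, obs, obs_var):
    w = prior_var / (prior_var + obs_var)  # weight given to new evidence
    return prior_mu + w * (obs - prior_mu)

# Same piece of evidence ("the rubber hand is mine", coded as 1.0),
# prior centred on 0.0 ("it is not my hand"):
broad = posterior_mean(0.0, prior_var=1.00, obs=1.0, obs_var=1.0)  # 0.5
sharp = posterior_mean(0.0, prior_var=0.01, obs=1.0, obs_var=1.0)  # ~0.0099
```

With the broad prior the belief shifts halfway toward the evidence in one trial; with the sharp prior it barely moves, so many more trials (or much cleaner evidence, i.e. smaller `obs_var`) are needed to reach the same posterior.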

I wish I had the references lying around, but I dug into the literature for this a few years back and found this hypothesis to be surprisingly well supported.


I agree; Karl Friston's work is among the most interesting I have ever read, period. Interestingly, his 2009 paper (https://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20prin...) on the free-energy principle makes use of reinforcement learning, gradient descent, Markov blankets, Helmholtz machines, and other foundational tools of modern machine learning. Relatedly, Geoff Hinton (a foundational figure in modern machine learning) overlapped with Friston during the part of Hinton's career he spent in England.


Yes, an interesting man. I encountered him at a workshop given by Bert Kappen on stochastic optimal control. Kappen's work shows that there are different control strategies for different noise levels, separated by phase transitions.

I checked Friston again. He now also has this article: https://www.frontiersin.org/articles/10.3389/fncom.2012.0004...

CLE = Conditional Lyapunov Exponents.

"In short, free energy minimization will tend to produce local CLE that fluctuate at near zero values and exhibit self-organized instability or slowing."

I have to study it more to understand what he means by self-organized instability.


Oh interesting - thanks for the pointers.


Note that Geoffrey Hinton's first Restricted Boltzmann Machines were designed to minimize free energy. The first Restricted Boltzmann Machine, however, was Paul Smolensky's Harmonium. It maximized a metric called harmony, which was essentially the negative of free energy. When Hinton and Smolensky collaborated with Rumelhart on a publication, they settled on calling it "goodness of fit".

My point is that saying that the brain is maximizing harmony is quite reasonable -- and much easier to understand.
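The harmony/free-energy duality is just a sign flip on the RBM energy. A minimal sketch with made-up weights (the parameters here are random, purely for illustration, not from any of the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))   # visible-hidden weights of a tiny RBM
b = np.zeros(3)               # visible biases
c = np.zeros(2)               # hidden biases

def energy(v, h):
    """Joint energy E(v, h) of a visible/hidden configuration."""
    return -(v @ W @ h + b @ v + c @ h)

def harmony(v, h):
    # Smolensky's harmony is (up to constants) negative energy:
    # maximizing harmony == minimizing energy.
    return -energy(v, h)

def free_energy(v):
    """RBM free energy of v: -log sum_h exp(-E(v, h)), hidden units summed out."""
    return -(b @ v) - np.sum(np.logaddexp(0.0, c + v @ W))
```

So "the brain maximizes harmony" and "the brain minimizes free energy" are two descriptions of the same objective, differing only in sign and in whether the hidden units have been summed out.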

Rumelhart, D. E., Smolensky, P., McClelland, J. L., & Hinton, G. E. (1986). Schemata and sequential thought processes in PDP models. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 2. MIT Press, Cambridge, MA.


"the brain" Can any of this grand top-down 'delusions and personality traits are thermodynamic xyz' theorising about "the brain" apply to, say a bee's brain or a worm's?


Two facile answers:

1) Yes. Why couldn't it?

2) No, it requires a certain level of brain complexity.


Regarding complexity: Probably anything with a brain has the problem of balancing lots of different sources of information and maintaining (enough) coherence of behavior. So even very simple worms will need a simple version of this...


Hard to identify delusions in worms. But the entropy / free energy models do work for the best-understood worm brain (C. elegans).


Specifically, "Signatures of criticality in a maximum entropy model of the C. elegans brain during free behaviour" <http://cognet.mit.edu/proceed/10.7551/ecal_a_010> and I'm sure many more.



