
Yes and no - the brain is also incredibly parsimonious with respect to how few resources it uses to achieve the information processing power it has. If you could make a computer which could compete in terms of utility, energy usage, and size, you'd be a billionaire in no time.

It's probably true that you could imagine a "perfectly designed" brain which could perform better on some tasks with less complexity, but I think it's also true that there's been a lot of selection pressure towards increased intelligence, so this is probably fairly well optimized.

> why don’t we have a normal abstraction for sending signals? Instead, we have like 10s of slightly different ones, each with different failings and a lot of repetitive machinery, leading to inefficient “spaghetti code”.

What do you mean exactly by this? Like different neurotransmitter systems? Because I think it's actually quite elegant how the properties of different neurohormones lead to different processing modalities. It's like we have purpose-built hardware on the scale of individual proteins specialized for different purposes. I'm not so sure a more homogenized process for neural signaling would be an improvement.



John von Neumann wrote an essay on the topic titled “The Computer and the Brain”, which is quite a good comparison between the two types of systems, even though knowledge of both was pretty primitive at the time. The basic idea is that computers are multiple orders of magnitude faster at serial calculations, but brains offset this difference by the sheer number of “dumb” processing units, with an insane number of interconnections. Also, I don’t think that comparing the training of a neural network to the brain is fair from an energy usage point of view - compare the brain’s usage with that of the final, trained NN instead.

As for how optimized the human brain is - well, good question. I think not even a single biological cell is close to efficient; at most it sits at a local optimum. The reason is perhaps that “worse is better” in terms of novel functionality. But I don’t think there was much evolutionary pressure on intelligence once a sufficient level emerged - it is sort of a first-past-the-post-wins-all situation.


> John von Neumann wrote an essay on the topic titled “The Computer and the Brain”

I have to be honest, I would take any such comparison from the 1950s with a huge pinch of salt. I think perceptions of how "dumb" an individual neuron is as a processing unit have shifted quite a bit since then.

> Also, I don’t think that comparing the training of a neural network to the brain is fair from an energy usage point of view - compare the usage of the final NN with it.

I'm not considering this in terms of training efficiency; I'm looking at it in terms of the ratio between operational utility and energy used. There's no trained ANN with anything remotely close to the overall utility of the human brain at any scale, let alone one that weighs 3 lbs, fits into a human skull, and runs on 20 watts of power.


I only mentioned that essay because I think its fundamental vision is still correct: in serial computation, silicon beats “meat” hands down, in both power efficiency and performance.

The fundamental difference between our current approach and biological brains is just as much a hardware one as a theoretical one. CPUs and GPUs are simply not the best fit for this sort of usage: a “core” is way too powerful for what a single neuron does (even granting the more accurate view that neurons are not as dumb as we first thought), even if each core can simulate multiple neurons simultaneously. I’m not sure of the specifics, but couldn’t we print a pre-trained NN onto a circuit that could match or beat a simple biological neural network in both speed and power efficiency? Cells are inefficient.


> in serial computation, silicon beats “meat” hands down, in both power efficiency and performance

I just don't think this is a meaningful comparison, and I'm not convinced it's evidence of the "limitations" of biological computation.

Silicon beats biology at binary computation because it's a single-purpose machine built for that task. But a brain is capable of serving as a control system operating millions of muscle fibers in parallel to navigate the body smoothly through unpredictable 3D space, while at the same time modulating communication to find the right way to express thoughts and advance interests in complex and uncertain social hierarchies, while also forming opinions about pop culture, composing sonnets, falling in love and contemplating death.

For me to buy the argument that ANNs can be more efficient than biology, you'd have to show me a computer which can do all of that using fewer resources than the human brain. Currently we have an assembly line for math problems.

> a “core” is way too powerful for what a single neuron does

I just think you're vastly underestimating the complexity of what happens inside a single neuron. At every synapse there's a complex interplay of chemistry, physics and biology which constitutes the processing of the neurotransmitter signal from the presynaptic neuron. To simulate a single neuron accurately, we actually need all the resources of a very powerful computer.
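To give a rough sense of scale: even the classic Hodgkin-Huxley model, which reduces a neuron to a single electrical compartment and ignores synaptic chemistry entirely, already takes four coupled differential equations stepped at ~0.01 ms resolution. A minimal sketch with the standard textbook squid-axon parameters (a sketch only, not a serious simulator):

    import numpy as np

    # Hodgkin-Huxley point neuron: membrane voltage V plus three
    # gating variables (n, m, h), integrated with plain Euler steps.
    C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3     # uF/cm^2, mS/cm^2
    E_Na, E_K, E_L = 50.0, -77.0, -54.387           # reversal potentials, mV

    def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
    def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
    def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
    def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

    dt, T = 0.01, 50.0                              # timestep and duration, ms
    V, n, m, h = -65.0, 0.317, 0.053, 0.596         # resting-state values
    for step in range(int(T / dt)):
        I_ext = 10.0 if step * dt > 5.0 else 0.0    # injected current, uA/cm^2
        I_Na = g_Na * m**3 * h * (V - E_Na)         # sodium current
        I_K  = g_K * n**4 * (V - E_K)               # potassium current
        I_L  = g_L * (V - E_L)                      # leak current
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    print(f"final membrane voltage: {V:.1f} mV")

And that's the cheap end of the spectrum: detailed morphological models split each cell into hundreds or thousands of such compartments, before you even touch the synaptic chemistry.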

So it may be the case that we can boil down intelligence to some kind of process which can be printed in silicon. But I think it's also entirely likely that the extreme parallelism (vast orders of magnitude greater than the widest GPU) of the brain is required for the kind of general intelligence that humans express, and the "slowness" of biological computation is a necessary trade-off for the flexibility we enjoy. If that's the case, it's going to be very hard for a serial computer to emulate intelligence.


I'm by no means saying that our brain is not impressive - even a fly’s is marvelously complex and capable. But all of them are made up of cells that were created through evolution, not intelligent design. The same way the giraffe’s recurrent laryngeal nerve runs all the way down and back up its neck for absolutely no reason other than evolution modifying only one factor (neck length) without restructuring, cells have many similar sorts of “hacks”. So I think it is naive to believe that biological systems are efficient. They do tend to optimize toward a local optimum, but there are inherent hard limits there.

Also, while indeed we can’t simulate the whole of a neuron, why would we want to do that? I think that is backwards. We only have to model the actually important function of a neuron. If we had a water computer, would it make sense to simulate fluid dynamics instead of just the logic gates? Due to the messiness of biology, some hard-to-model factors will indeed affect things (in the analogy, water will be spilt or will evaporate), but we should overlook the ones that have minimal influence on the results.


> while indeed we can’t simulate the whole of a neuron, why would we want to do that? I think that is backwards. We only have to model the actually important function of a neuron.

Yeah so I think this is where we fundamentally differ. It seems like your assumption is that neurobiology is fundamentally messy and inefficient, and we should be able to dispense with the squishy bits and abstract out the real core "information processing" part to make something more efficient than a brain.

So if that's your assertion, what would that look like? What would be the subset of a neuron that we could simulate which would represent that distillation of the information processing part?

Because my argument would be, the squishy, messy cellular anatomy is the core information processing part. So if we try to emulate neural processing with the assumption that a whole neuron is the base unit, we will miss a lot of that micro-level processing which may be essential to reaching the utility and efficiency achieved by the human brain.

I'm not against the idea that whatever brains we happened to evolve are not the most efficient structure possible. But my position would be, we're probably quite far, in terms of current computing technology, from being able to build something better. I would imagine we might have to bioengineer better neurons if we really want to compete with the real thing, rather than trying to simulate it in software.


I can’t think of any field of research where the model used is completely accurate. At one point we will have to leave behind the messy real world. While a simple weighted node is insufficient for modeling a neuron, there are more complex models that are still orders of magnitude less complex than simulating every single interaction between who knows how many moles of molecules (which, as far as I know, we can’t even do for a handful of molecules, let alone at such a huge volume).

But I feel I may be misrepresenting your point now. To answer your question: maybe a sufficient model (sufficient to reproduce some core functionality of the brain, e.g. forming memories) would be one that incorporates a weight for each kind of signal (neurotransmitter) the neuron can process, complete with a fatigue model per signal type. We could perhaps also add the notable major interactions between pathways (e.g. activation of one temporarily decreasing the weight of another, though in a way the bias term in very basic NNs is sort of this already). To be honest, such a construction would be valuable even with arbitrary types of signals; there is no need to model it exactly on existing neurotransmitters. I think most of the properties interesting from an AGI perspective are emergent ones, and whether dopamine does this or that is an implementation detail of human brains.
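To make that concrete, here's a toy sketch of the kind of model I have in mind; every name and constant is made up for illustration. One weight and one fatigue state per signal type, with use temporarily suppressing a pathway's effective weight:

    import numpy as np

    # Hypothetical neuron with a separate weight and fatigue state
    # per signal type ("neurotransmitter"). Using a pathway builds up
    # fatigue, which temporarily suppresses that pathway's weight.
    class MultiSignalNeuron:
        def __init__(self, n_signal_types, recovery=0.95, depletion=0.2):
            self.w = np.random.randn(n_signal_types)  # weight per signal type
            self.fatigue = np.zeros(n_signal_types)   # 0 = fresh, 1 = exhausted
            self.recovery = recovery                  # fatigue decay per step
            self.depletion = depletion                # fatigue added per use

        def step(self, inputs):
            # inputs: activity level per signal type, shape (n_signal_types,)
            effective_w = self.w * (1.0 - self.fatigue)
            activation = np.tanh(effective_w @ inputs)
            # pathways that were just used become temporarily weaker
            self.fatigue = np.clip(
                self.fatigue * self.recovery + self.depletion * inputs,
                0.0, 1.0)
            return activation

    neuron = MultiSignalNeuron(n_signal_types=4)
    for t in range(3):
        print(neuron.step(np.array([1.0, 0.0, 0.5, 0.0])))

Cross-pathway interactions (one signal type suppressing another's weight) could be added as an interaction matrix applied to the fatigue update, but even this stripped-down version shows the general shape.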


> What would be the subset of a neuron that we could simulate which would represent that distillation of the information processing part?

You only need to accurately simulate the input and the output.

Frankly, I’d be very surprised if that couldn’t be done with a Markov process, and we already know that Markov chains can be simulated with ANNs.
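The Markov chain half of that claim is easy to see concretely: a single softmax layer whose weights are the log transition probabilities reproduces the chain's transition distribution exactly. A toy sketch with an arbitrary 3-state chain:

    import numpy as np

    # P[i, j] = Pr(next state = j | current state = i); rows sum to 1.
    P = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])

    W = np.log(P)  # use log-probabilities as the layer's weights

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    state = np.array([1.0, 0.0, 0.0])   # one-hot encoding of current state
    next_dist = softmax(state @ W)      # network output
    print(next_dist)                    # [0.7, 0.2, 0.1], exactly row 0 of P

Since softmax(log p) = p for any probability distribution p, the output matches the transition row exactly; sampling from it steps the chain.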


So just to unpack this a little - there are a lot of different mechanisms at work in neural computation.

For instance, one of those is spike-timing-dependent plasticity. Basically, the idea is that the sensitivity of a synapse gets up-regulated or down-regulated depending on the relative timing of the firing of the two neurons involved. So in the classic example, if the up-stream neuron fires before the down-stream neuron, the synapse gets stronger. But if the down-stream neuron fires first, the synapse gets weaker.
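In the standard pair-based formulation this shows up as an exponential window over the spike-time difference. A quick sketch (the constants are illustrative, not measured values):

    import numpy as np

    # Pair-based STDP: dt = t_post - t_pre, in milliseconds.
    # dt > 0 means the up-stream (pre) neuron fired first -> strengthen;
    # dt < 0 means the down-stream (post) neuron fired first -> weaken.
    def stdp_dw(dt_ms, A_plus=0.01, A_minus=0.012, tau_ms=20.0):
        if dt_ms > 0:                                 # potentiation
            return A_plus * np.exp(-dt_ms / tau_ms)
        return -A_minus * np.exp(dt_ms / tau_ms)      # depression

    for dt in (+5.0, +20.0, -5.0, -20.0):
        print(f"dt = {dt:+5.1f} ms -> dw = {stdp_dw(dt):+.4f}")

Note that the weight change depends on spike timing, not on activation magnitude, which is exactly the kind of information a standard rate-based ANN never sees.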

Another one is synchronization. It appears that the firing of groups of neurons which are, for instance, representing the same feature becomes temporally synchronized. I.e. you could have different neural circuits active at the same time in the brain, but oscillating at different frequencies.

Another interesting mechanism is how dopamine works in the nucleus accumbens. Here you have two different types of receptors at the same synapses: one of them is inhibitory and sensitive at low concentrations of dopamine; the other is excitatory and sensitive at high concentrations. What this means is that at a single synapse, the same up-stream neuron can either increase or decrease the activation of the down-stream neuron: if the up-stream neuron is firing just a little, the inhibitory receptors dominate, but if it's firing a lot, the excitatory receptors take over and the down-stream neuron starts to activate more. What kind of connection weight in an ANN can model that kind of connection?
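For concreteness, here's a toy model of that sign flip; all the constants are made up. Two saturating receptor responses with different affinities, whose sum changes sign as the dopamine concentration rises:

    import numpy as np

    # Hill-type saturation: response rises with concentration c and
    # saturates, with half-maximum at k.
    def hill(c, k, n=2):
        return c**n / (c**n + k**n)

    def net_drive(dopamine):
        inhibitory = -1.0 * hill(dopamine, k=0.2)  # high affinity: saturates early
        excitatory = +2.0 * hill(dopamine, k=1.0)  # low affinity: needs more dopamine
        return inhibitory + excitatory

    for c in (0.1, 0.3, 1.0, 3.0):
        print(f"dopamine = {c:3.1f} -> net drive = {net_drive(c):+.2f}")

At low concentrations the net drive is negative, at high concentrations positive. A single static ANN weight can't express this, because the sign of the connection depends on the input's magnitude.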

My overall question would be: do you think back-propagation and Markov chains are really sufficient to account for all the subtlety we have in neural computation, especially when it comes to specific timing and frequency-dependent effects?


If Markov processes won’t cut it, a Turing machine will. And an ANN can approximate a Turing machine.

To boil it down, if you really want to argue that the behaviour of a neuron can’t be simulated by an ANN, you’re arguing that a neuron is doing something non-computable. At which point you might as well argue it’s magical.


So I think this thread was about two claims:

1. Can ANNs (in their current iteration) achieve general intelligence?

2. Can they do it more efficiently than a biological brain?

It certainly has not been established that a Turing machine can achieve general intelligence.



