Is Information in the Brain Represented in Continuous or Discrete Form? (arxiv.org)
198 points by gballan on May 22, 2018 | 67 comments


Rushton (1961) concluded that the neural signaling of a typical human myelinated nerve fiber spanning, say, between a finger and the spinal cord cannot employ a continuous representation due to the presence of noise. Despite these seminal works, computational models based on continuous representation dominate present day neuroscience literature – for example, continuous attractor networks (Eliasmith, 2005; Wang, 2009).

Sure, we know from basic information theory that any noisy system is inherently discrete. However, depending on the magnitude of the noise, I don't see why we should not describe models in terms of continuous variables.

As an analogy, consider the average numerical computer program. We often derive the underlying math in terms of real-valued vector spaces. However, when implementing the program on a computer system, all variables are ultimately discrete. A reason for not thinking about our problems in terms of discrete objects is that math just tends to get incredibly complicated as soon as objects are discrete; and that the computational substrate is fine-grained enough (in most cases).

Similarly, if noise limits the effective resolution of individual signals or representations in the brain to 5 bits (number taken from the paper), that (in my opinion) still does not mean that we should stop describing computational neuroscience models in continuous spaces -- at least not if they are validated against the empirically measured amount of noise, which is exactly what Eliasmith's lab (cited above) does (full disclosure: I'm one of his students). Furthermore, as soon as you code information in populations of neurons, the noise on individual connections becomes less important; one could argue that whenever the brain needs precise computations, it dedicates more neural resources to that problem, and the noise will "average out".
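A minimal numpy sketch of that averaging argument (illustrative numbers only, nothing from the paper): each neuron carries the same underlying value plus independent noise, and the error of the population average shrinks roughly as 1/sqrt(N).

    import numpy as np

    rng = np.random.default_rng(0)
    signal = 0.7              # the "true" continuous value being encoded
    noise_sd = 0.2            # per-neuron noise, a made-up magnitude

    for n_neurons in (1, 10, 100, 1000):
        # every neuron reports signal + independent noise; decode by averaging
        reads = signal + noise_sd * rng.normal(size=(10_000, n_neurons))
        decoded = reads.mean(axis=1)
        print(n_neurons, decoded.std())  # shrinks roughly as 1/sqrt(N)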


>I don't see why we should not describe models in terms of continuous variables.

You're being way too polite in your wording; as your next paragraph states, every field that uses computers has done it since computers were invented. Allow me to push it further: AFAIC almost all hand-calculated math is discrete. The numbers you can write on a page are countable, in fact finite, so all actually calculated math is "discrete" in some sense. No one ever really touches the full continuum of R other than abstractly: in real calculations, whenever you truncate pi or sqrt(2), or generally calculate with a fixed set of digits, you are doing _discrete_math_. So yes, Q is dense in R, which helps, and you could write out more digits if you really wanted to, but even then, people restrict themselves to small, finite subsets of Q, and even resort to things like logarithms/orders of magnitude to keep that set as small as possible, since our brains can't take all the hairiness. But still, that set is large enough to do the things we care about anyway.

However, when people want to think abstractly rather than explicitly (calculating), it's easier to take the limit as the gaps go to zero and deal with clean, C^\infty functions. I mean, that's literally what people mean (at least that's how my experimental-physicist brain thinks of it) when they say analysis/calculus is about approximations. I deal with plasma in the hot, nonquantum limit, so my ions and electrons are in fact discrete particles. Whenever I calculate an electric field or a magnetic field for the distribution, I ignore the fact that there will be noise on the scale of individual particles and replace that noisy function with a clean, smooth function, which tends to be a good approximation. In fact, this is how almost every classical physics problem is solved: you ignore the structure at the particle level and pretend it is "continuous". Same with materials engineering, and so on.
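A toy version of that smoothing step (my own illustration, not actual plasma code): sample discrete "particles" from a Gaussian density and compare the noisy empirical histogram to the smooth analytic function it approximates.

    import numpy as np

    rng = np.random.default_rng(1)
    particles = rng.normal(0.0, 1.0, size=100_000)  # discrete particle positions

    edges = np.linspace(-4, 4, 81)
    counts, _ = np.histogram(particles, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    smooth = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)  # the clean limit
    print(np.abs(counts - smooth).max())  # small: particle-scale noise only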


Forgive my ignorance, but isn’t it true that we don’t know what elementary particles are made of? In other words, doesn’t it appear that matter is both continuous and discrete, and that we could conceivably find particles that comprise elementary particles... and so on? Is there a name for this paradox?


I mean, the standard model says they are fundamental, although there are theories that they may be "made up" of other things, like string theory, though those theories have (imo) struggled to compare to experiment, or demand experiments that are infeasible today. Regarding "continuous and discrete": quantum mechanically, electrons aren't definite in space; you might be referring to that. In plasma physics we sit above the quantum limit (neglecting p and x variance, or more correctly, the product of the variances well exceeds \hbar), so we treat them discretely; in a sense, as all classical physics is, it's an approximation too.

My point is that even in classical physics, I usually don't care about fluctuations on small scales which will be noisy, so on top of the classical approximation I make another approximation where I replace a noisy function with a smoother function that well approximates the noisy one. Smoothing out the noise is an important tool for theoretical understanding (as OP's student pointed out here), but it's important to remember it's just an approximation.

EDIT: re the other replier. Another example is that I treat ions as "fundamental" too, as we don't reach energies and conditions where the internal structure of their nuclei matters, only ionization.


Well, it's not a paradox as described. Just bad naming on our part, to prematurely call the first particles we discovered "elementary" without first waiting to see if they have more fundamental particles below them.


> The numbers you can write on a page are countable, in fact finite, so all actually calculated math is "discrete" in some sense.

Wouldn't it be more accurate to say you're simply working in a discrete _set_?


To push the point slightly further: Floats, while realish, still have a discrete representation under the hood... We're still doing math (or at least, calculations) on a finite subset of the reals.
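Easy to see from Python (3.9+, standard library only):

    import math

    print(math.ulp(1.0))                   # gap to the next float: ~2.2e-16
    print(math.nextafter(1.0, 2.0) - 1.0)  # the same gap, made explicit
    print(0.1 + 0.2 == 0.3)                # False: none are exactly representable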


Even continuous functions are replaced with symbols that are finite and countable. They are manipulated with a set of countable operations and a finite set of steps.


> As an analogy, consider the average numerical computer program. We often derive the underlying math in terms of real-valued vector spaces. However, when implementing the program on a computer system, all variables are ultimately discrete. A reason for not thinking about our problems in terms of discrete objects is that math just tends to get incredibly complicated as soon as objects are discrete; and that the computational substrate is fine-grained enough (in most cases).

When doing computations that require a lot of precision, one actually needs to consider the fact that floats/doubles are discrete and with limited precision, and handle that in the code. Typical examples may be numerical analysis, physical simulations, game world code (for assuring consistent state for all connected players), and geometrical algorithms.
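One classic way of handling it in code is compensated (Kahan) summation, which keeps a running correction for the low-order bits a naive sum throws away. A minimal sketch:

    def kahan_sum(values):
        total = 0.0
        comp = 0.0                  # compensation for lost low-order bits
        for v in values:
            y = v - comp
            t = total + y           # low-order bits of y are lost here...
            comp = (t - total) - y  # ...and recovered here
            total = t
        return total

    xs = [1.0] + [1e-16] * 10**6
    print(sum(xs))        # 1.0 exactly: every tiny addend is rounded away
    print(kahan_sum(xs))  # ~1.0000000001: the compensation preserves them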

Fortunately for most programmers, they do not need that level of precision in their code.


What about the computable reals? Our computer programs are, in a sense, capable of emitting the digits of any computable real number, and we have arithmetic (Gosper continued-fraction arithmetic) on those digits.

The fact that values might be discrete does not change that programs might be continuous over their output range; functions from computable reals to computable reals can be continuous.
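For instance, a sketch that emits arbitrarily many digits of sqrt(2) using only exact integer arithmetic (the simplest digit-emitting approach, not Gosper's continued-fraction method):

    from math import isqrt

    def sqrt2_digits(n):
        # isqrt(2 * 10**(2n)) == floor(sqrt(2) * 10**n), so its decimal
        # representation gives the first n+1 digits of sqrt(2)
        return str(isqrt(2 * 10**(2 * n)))

    print(sqrt2_digits(50))  # 14142135623730950488016887242096980785696...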


Exactly.

The question isn’t whether something is discrete so much as whether it is sampling a continuous function. Can you have arbitrarily large jumps, or not?

See Buridan’s Principle :)


> As an analogy, consider the average numerical computer program. We often derive the underlying math in terms of real-valued vector spaces. However, when implementing the program on a computer system, all variables are ultimately discrete.

You're absolutely right, and this is a great analogy.


It frustrates me that the engineering community has so little recollection of its own history that it doesn't even contextualise /this/ question as one of the founding problematics of its own specialism, and of our cybernetic age in general. The question of whether the brain functions discretely ('digitally') or continuously ('analogue') was one of the three main debates at the Macy Meetings in the 1940s. [0]

For example Warren McCulloch and Walter Pitts effectively staked their careers on the brain being a fully digital network, as a computer. But John Von Neumann was more cautious, arguing that the digital function of the brain rested on a chemical, analogue foundation, and that it was unclear whether messages were coded in a digital or analogue way. Julian Bigelow argued that mathematicians and physicists preferred to ignore the biological structure of neurons and identify them with their digital operation.

This strikes at the heart of the difference between analogue and digital, and in my opinion it's a philosophical, not just physiological, problem.

[0] For anyone interested in this history I'd highly recommend Ronald R. Kline's work, especially The Cybernetic Moment (p 46-47)


Why not both? Left neocortex hemisphere handles discrete, right handles continuous. Iain McGilchrist makes a convincing argument for that in his book The Master and His Emissary.

Also, the main difference between 'hardware' and 'wetware' arises as a consequence of a very simple cybernetic principle, to quote Wikipedia:

"Also in 1960, [Manfred Clynes] discovered a biologic law, "Unidirectional Rate Sensitivity," the subject, in 1967, of a two-day symposium held by the New York Academy of Science. This law, related to biologic communication channels of control and information, is basically the consequence of the fact, realized by Clynes, that molecules can only arrive in positive numbers, unlike engineering electric signals, which can be positive or negative. This fact imposes radical limitations on the methods of control that biology can use. It cannot, for example, simply cancel a signal by sending a signal of opposite polarity, since there is no simple opposite polarity. To cancel, a second channel involving other, different molecules (chemicals) is required. This law explains, among other things, why the sensations of hot and cold need to operate through two separate sensing channels in the body, why we do not actively sense the disappearance of a smell, and why we continue to feel shocked after a near-miss accident."

See also here:

https://upload.wikimedia.org/wikipedia/en/1/19/NYT3.jpg

Also, fun fact about the Macy conferences that a friend of mine pointed out to me recently:

According to Hayles, the word “reflexivity” does not appear anywhere in the Macy Foundation transcripts, which means no one introduced it to cybernetics from 1946-1953.


> Why not both? Left neocortex hemisphere handles discrete, right handles continuous. Iain McGilchrist makes a convincing argument for that in his book The Master and His Emissary.

I haven't read the book, but this sounds totally absurd. What kind of evidence is there for this?


That question has a very complex and complicated answer, which is why McGilchrist devoted the entire first half of the book to it; I can hardly summarize it here. You might want to watch the RSA Animate talk by McGilchrist:

http://www.youtube.com/watch?v=dFs9WO2B8uI

If this reminds you of Julian Jaynes, then yes, there exists some similarity, but, to quote Wikipedia:

"McGilchrist, while accepting Jayne's intention, felt that Jayne's hypothesis was "the precise inverse of what happened" and that rather than a shift from bicameralism there evolved a separation of the hemispheres."

In my opinion, a better question to ask is whether what McGilchrist describes holds 'all the way down', and I lack an answer to that.


Interestingly the DNA structure was discovered in the 1950s. In what ways could it have shaped that discussion?


Historically speaking, I'm not sure, but I'd imagine the same debate would be had there too. This isn't just an epistemological question, but the product of multiple scientific revolutions (the return of atomism in the late 19C, quantum mechanics, neurophysiology, cybernetics...), so, to take a Kuhnian position, the innovations of the 1950s especially are well within the same epistemological regime.


Mendel already showed the discreteness of genes, without knowing the structure of DNA, in the 1860s. And I believe Darwin, though not aware of Mendel's work, realised that continuous genes would be a problem for his theory.


An interesting statement here:

'One answer, as outlined by VanRullen and Koch (2003), is that continuous representation “cannot satisfactorily account for a large body of psychophysical data”. For example, 1 cent does not typically have much value to most people. However, a person may decide to buy a product if priced at $1.99 – yet, refuse to buy the same product if priced 1 cent higher at $2.00. Such an abrupt (or step) change in the brain’s purchasing decision cannot be modeled using a continuous representation despite extensive attempts to do so (Basu, 1997).'

The tacit assumption made here seems to be that our concept of number is hard-coded into our brains at the physical signal level, so that when we think of the number 2, some part of our brain generates a physical signal that has the quantity of 2 (absent that assumption, why would one think that this experiment tells us anything about how signals are physically encoded in the brain?)

AFAIK, it is generally thought that our concept of numbers is at a more abstract, symbol-manipulation level (especially when, as in this example, fractions are involved.) It seems to me that this paradox (which looks like a variant of the sorites paradox, e.g. what is the minimum size of a heap of sand?) is resolved if we consider peoples' tendency to disregard the pennies in a monetary amount.

Caveat / mea culpa: I have not read the papers referenced in the quote.


Perhaps it is an "abstraction bug" where our existing math for handling discreteness never comes into play? We do not reject a glass filled slightly below the fill line as a rip-off, but we would object to being charged 10 percent more. Cents are modeled as a second variable instead of as part of a whole, for a variety of reasons. Brains tend to learn to skip "irrelevant" steps with experience; thus we take "shortcuts" like looking only at the dollar amount and not the trailing 99 cents. If we were dealing with, say, metal dust as currency, we would be unimpressed by slightly smaller payment scoops as discounts.


Yeah, as a layman on the biological side of this paper, some of the reasoning feels circular or seems to embed additional assumptions.

The 1.99 ~= 2.00 case is one example. The authors intend to determine whether any continuous model could account for a difference in behavior, but their example only rules out models where that one-cent difference is considered trivial. A continuous model might be perturbed in a way that makes those prices subjectively far apart.
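To make that concrete, here is a hypothetical continuous model where $1.99 and $2.00 do land subjectively far apart: a logistic "willingness to buy" curve with a very steep slope near a reference price (all parameters invented for illustration).

    import math

    def buy_probability(price, threshold=2.00, steepness=500.0):
        # smooth (C-infinity) everywhere, yet nearly step-like in behavior
        return 1.0 / (1.0 + math.exp(steepness * (price - threshold)))

    print(buy_probability(1.99))  # ~0.993: buys
    print(buy_probability(2.01))  # ~0.007: refuses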

The same goes for the information retrieval task. They are saying that biology can't perform a discrete classification task without having a discrete representation somewhere.


Alternatively, people tend to simply ignore the sub dollar parts of prices as largely meaningless information.


> AFAIK, it is generally thought that our concept of numbers is at a more abstract, symbol-manipulation level

Which is a necessary abstraction to inhibit the aforementioned fallacy. Not that I know any particulars or anything.


It's really surprising to me that people still fall for the $1.99 trick. When I see any price I instantly round it up to the nearest dollar, hundred, thousand, etc. It makes comparisons so much easier!


I bet you don't. In the sense that if you were under extreme surveillance and someone analyzed all your purchasing decisions there would be a detectable difference in your behavior based on the "99 cent trick".

It's a pretty well known cognitive bias to overestimate one's resistance to cognitive biases.


> we show that information cannot be communicated reliably between neurons using a continuous representation

It seems like this doesn't take into account spiking frequencies (where the frequency is a continuous variable), nor the potential for signals with different average magnitudes (even if each signal is inconsistent due to the presence of noise) to have a statistically significant effect over many iterations.
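A sketch of the rate-code point (made-up numbers): individual spikes are all-or-nothing and the counts are noisy, yet averaging over many observation windows recovers a continuous underlying rate.

    import numpy as np

    rng = np.random.default_rng(2)
    rate_hz = 37.3     # a continuous quantity to be transmitted
    window_s = 0.1

    # discrete, noisy spike counts across many observation windows
    counts = rng.poisson(rate_hz * window_s, size=10_000)
    print(counts.mean() / window_s)  # ~37.3: the continuous rate re-emerges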


In my view, there seems to be some confusion in the paper between 'discrete' and 'digital': "Furthermore, in the present work, the terms continuous and analog are treated as equivalent in an engineering sense, as are the terms discrete and digital." [my emphasis]. A similar conflation is seen in several of the quotes from other work in the introduction of the paper.

I think there is a distinction between 'discrete', where the signal is encoded as a non-continuous physical property, and 'digital', which adds the concept of place value to discreteness. This distinction is relevant, for example, where the authors contrast the ability of digital recordings to resist the degradation that analog recordings suffer from. That resistance, however, depends quite substantially on error-correcting codes, for which place-value matters.
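To illustrate the place-value point (a textbook Hamming(7,4) sketch, not anything from the paper): the syndrome below is the XOR of the positions of the set bits, so it literally exploits the binary place value of bit positions to locate and undo a single flipped bit.

    def hamming74_encode(d):  # d: four data bits
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # positions 1..7

    def hamming74_correct(c):  # c: seven received bits, at most one flipped
        s = 0
        for i, bit in enumerate(c, start=1):
            if bit:
                s ^= i         # syndrome = position of the flipped bit
        if s:
            c[s - 1] ^= 1      # undo the flip
        return [c[2], c[4], c[5], c[6]]

    word = hamming74_encode([1, 0, 1, 1])
    word[4] ^= 1                    # corrupt one bit in transit
    print(hamming74_correct(word))  # [1, 0, 1, 1]: recovered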


My intuition is that it's represented ultimately in a discrete form, but in a symbolically-compressible way, such that continuities can be represented extremely cleanly.

Let's take a contrived example. You know that if you press on the gas pedal of the car, the car moves forward. You also know that if you press hard, it jerks. Your brain doesn't represent all the intermediate states between light and hard pressing, it instead represents the continuity between the two symbolically.

We use these continuity representations at speed to do things in the real world, and breaks in those continuities really trip us up, and force us to slow down until a new representation of the continuity can be formed. When it happens to me, it almost feels like I'm "repacking" the information back into my brain.

But if you meditate on how you learn things, one can come to the conclusion that all knowledge 'feels' the same way in the mind. For me, this knowledge is discrete; I've even been able to articulate some 'operations' that one can apply directly to conceptual units.

If you think about it, an "intuitive" understanding of something means precisely this: you understand the system as a whole without having to think hard about what it's doing in parts. Outputs are mapped cleanly to inputs in the mind.


I very much disagree with most of your examples. Most people don't seem to have a good sense of how associative memories work. The brain is so good at lying to us that we think we have all these discrete pieces of knowledge, but the reality, from my studies, is less reliable. When we 'learn' something, what's happening is that a bunch of neurons are being stimulated by inputs from all over your body. You end up with a neuronal pattern that is largely repeatable given similar enough inputs (it's why we can roughly see images someone is looking at with fMRI brain scans). But there is a problem when we 'learn' something: if we are in a different context and many of our sensory inputs are significantly different, then we may be completely unable to recall the thing we learned, or be unable to apply that knowledge, because not enough of the neurons fired to 'pattern match' our knowledge. What this means is that if we want an intuitive understanding of something, we probably need to have a lot of concrete examples to pull from, in as many different contexts as we can.

To get back to your gas pedal example: I'm fairly positive that the brain needs to map as many of the degrees of hardness of pressing to perceived acceleration as possible to get a good intuition for the force needed to achieve a given acceleration. And it's even 'worse' than you realize, because your brain also needs to map out pedal depression vs. environment (hills and turns affect acceleration), and depression vs. car load. And, if you want to be a really good driver, collect all this data in different cars.

Now we humans seem to have the ability to abstract away this underlying machinery to some degree, largely thanks to the wiring of our neocortex, so after the age of 5 you can start to interpolate where unknown data points might lie on some spectrum given a few reference data points. But even then it won't be intuitive until we've done the exercise many times with different data.


I think when dealing with physical modeling of this sort, obviously we're going to need lots of examples to get a full enough picture of reality. I play ping-pong, so I have to be really cognizant of this, as my mind has to pattern match the way the ball works hundreds of times in a game.

But if I truly had to see the pattern on every single angle, every single type of spin, every different paddle, every different table, every different room, then the sheer combinatorial complexity would make ping pong impossible to improve at. Talking about and coming up with insights on better play would also be impossible.

It's relatively easy to break the symbolic continuities that the brain stores with new types of inputs. That doesn't mean those symbolic continuities don't actually exist and that the brain will constantly seek them out.


I don't understand the argument here. Obviously you need some sort of amplification process to avoid noise. But there are amplifying processes that don't follow the "discrete" Shannon model. Like, I'm pretty sure neurons themselves are an example -- they maintain a stable voltage (even with noise) until they depolarize and spike. The process of depolarization is fundamentally analog, integrating together all the synaptic inputs (and anything else that affects the voltage in the cell). It can't be modeled using a sequence of discrete symbols, but it's also stable over time in the presence of noise.
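A standard leaky integrate-and-fire sketch of that hybrid character (toy constants, not fitted to any real neuron): the membrane variable integrates its noisy analog input continuously, but the output is a discrete all-or-nothing spike whenever a threshold is crossed.

    import numpy as np

    rng = np.random.default_rng(3)
    dt, tau = 1e-3, 0.02          # time step (s), membrane time constant (s)
    v_thresh, v_reset = 1.0, 0.0

    v, spike_times = 0.0, []
    for step in range(1000):      # simulate one second
        drive = 60.0 + 20.0 * rng.normal()  # noisy analog synaptic drive
        v += dt * (-v / tau + drive)        # continuous leaky integration
        if v >= v_thresh:                   # discrete threshold event
            spike_times.append(step * dt)
            v = v_reset
    print(len(spike_times), "spikes in 1 s")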


>It can't be modeled using a sequence of discrete symbols, but it's also stable over time in the presence of noise.

Did you mean can be?

When you learned to integrate continuous functions, I'm certain you learned to do it while scratching sequences of discrete symbols onto paper.


Using discrete symbols to describe a continuous model is different than using a discrete model. In this case, the paper is arguing that a particular discrete model based on sequences of discrete symbols (Shannon information theory) applies to the problem, not an arbitrary model including things like integrals.


I have a feeling, based on years of reading peer-reviewed material, that the brain can store quantum information or something mathematically similar. Go ahead and downvote me all the way to the loony bin if you'd like... Or read this article: https://www.quantamagazine.org/a-new-spin-on-the-quantum-bra...

There have also been empirically successful applications of quantum theoretical ideas to cognitive studies:

https://en.m.wikipedia.org/wiki/Quantum_cognition?wprov=sfla...

So while I appreciate that the article is taking into account the plausibility of "both" possibilities, it strikes me to be an uncannily "classical" question.


Computers can store structures designed to be mathematically similar to quantum information too (see [0] for example). It doesn't mean that computers can efficiently perform quantum computations. If there's evidence that humans can efficiently solve problems in BQP [1] complexity class, like cracking RSA encryption, I'd like to hear it.

[0] http://quantum-studio.net/ [1] https://en.wikipedia.org/wiki/BQP


I don't know of an example from BQP, but 3d protein folding is NP-complete and humans have outperformed computers at protein folding. Although I don't know how you'd even directly compare humans and computers. I suppose that's a big part of what this article is grappling with.

The interesting thing to me is not whether humans can solve exactly the same problems as quantum computers with lots of qubits (quantum computers are not identical to all of quantum theory!). The interesting question to me is whether humans can hold and pass around irreducibly probabilistic states without collapsing them.


Contemplation of brain cloning and consciousness creates a feeling of paradox; then the no-cloning theorem comes to mind. Right?


If that is indeed your line of reasoning, I don't understand how it is supposed to work. If the quantum state doesn't influence macroscopic behavior, then it's useless. If it does, then it will collapse.


I was being unclear. I'm wondering if the brain might hold probabilistic states for extended ("macroscopic") periods of time so that they can be observed/collapsed later, but not immediately.


A glass of water contains quantum information, too; that doesn't mean it does anything with that information other than be a glass of water.


The general hypothesis could be made nearly certain by a savant able to mentally factor sufficiently large numbers, or break elliptic curve cryptography. The only not-quantum possibility would be the unlikely case that these problems are solvable classically.


That's an interesting link — a priori it seems quite implausible because the brain is so warm and most quantum computers run near absolute zero.

> it strikes me to be an uncannily "classical" question.

Even if the brain is quantum, it's still a valid question. Some quantum degrees of freedom are discrete, like spin, while others like position are continuous.


I guess my point is that discrete observables can have continuous wave functions.


Thanks for the Quanta article. I would not be surprised at all if consciousness ends up being a quantum phenomenon. I say that as a cogneuro guy, but with no training in quantum mechanics, so pinch of salt and all.


My feeling is it's something more advanced than Quantum Theory we haven't discovered yet ;-) Like when humans had no clue about radioactivity and once we were able to measure it, many things started to make sense.


This topic is admittedly way over my head, but the analogy that comes to mind for me is audio recording and transmission. Back in the day, it was done with entirely analog systems, yet FM radio produced an intelligible output despite being bathed in a sea of noise.

Granted, after enough transcriptions, or if stored over a long enough period, the signal would begin to get washed out... kind of like the information in my brain.


That analogy doesn’t work. Radio is not in a sea of noise, thanks to frequency separation. Or I should say that unless there are other sources on a particular frequency, noise will not affect communication.


It seems we are using terms meant for manufactured items that probably don't apply to the brain. Neurons do what neurons do, regardless of the label. You can model them (in sufficient approximation) using both analog and digital means. If forced to classify neurons with "analog versus digital", I'd say they resemble our analog machinery more than our digital machines. A cup of coffee may change the "calculations" our brain makes, which is not a typical feature of our digital machines (unless they have what we'd call a defect).


Probably discrete, but a related question is what constitutes the discrete element? Science just discovered that it's not just neuron synapses, as previously believed. Recent discoveries with snails show that reflexive 'memories' are transferable from one snail to another via injection, so our synapse-only model was likely incomplete, at least for simpler organisms: https://www.cnn.com/2018/05/17/health/snail-memory-rna-scien...


Is there any abstraction of Deep Learning on discrete domains? Using discrete calculus from Concrete Mathematics perhaps? Or better a mixed discrete/continuous one? I guess optimization there would be killer...


There are many different ways to represent discrete domains in deep learning. One example is entity embeddings such as word2vec which turns a finite list of entities (words in word2vec) into a number of continuous variables.
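A minimal sketch of the idea, with randomly initialized vectors standing in for trained word2vec embeddings: a discrete vocabulary index maps to a point in a continuous space, where similarity becomes a continuous quantity.

    import numpy as np

    rng = np.random.default_rng(4)
    vocab = ["neuron", "spike", "synapse", "banana"]  # discrete entities
    emb = rng.normal(size=(len(vocab), 8))            # one 8-d vector per entity

    def vector(word):
        return emb[vocab.index(word)]  # discrete symbol -> continuous point

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(cosine(vector("neuron"), vector("spike")))  # continuous similarity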


I know. Though if you want to represent complex DAGs or do some inner combinatorics, it doesn't generalize that well. DeepWalk and its variations are only for toy-sized graphs.


> signifies a major demarcation from the current understanding of the brain’s physiology

There's always been a debate about whether the brain uses a (discrete) spike code or a more analog rate code.

Then there's also this paper, which found 26 levels of synapse strength in total: https://elifesciences.org/articles/10778


Continuity is not a set property but a function property: a function can satisfy the continuity condition.

Sets can satisfy a denseness property.


This posits a very mechanistic view of human cognition, assuming that the emergent parts of our thoughts can be reduced to discrete or continuous numbers. Thoughts, information, and knowledge are probably both continuous and discrete at the same time. Cognition is a biologically emergent process, not a number-manipulation exercise.

To that end Shannon’s theory of information needs an overhaul.


It’s always continuous. We get better similarity measurements with floating points.


Floating point numbers are not continuous.


Yeah, that is important to recognize, but floating point numbers could obviously sample continuous measurements with less loss than integers.


Only if the measurements span several orders of magnitude and you want to keep the relative error low. Otherwise you'd be better off with fixed-point numbers, which are spread out more evenly (about half of the IEEE floats are less than 1 in magnitude).

And if you don't limit yourself to linear mappings, you can probably do even better with a non-uniform integer encoding.
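A quick experiment along those lines (my own illustration): quantize samples confined to [0, 1) with a 16-bit fixed-point grid versus float16, and compare worst-case absolute error.

    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.uniform(0.0, 1.0, size=100_000)

    fixed = np.round(x * 2**16) / 2**16  # 16-bit fixed point on [0, 1)
    flt16 = x.astype(np.float16).astype(np.float64)

    print(np.abs(x - fixed).max())  # ~7.6e-6: error uniform across the range
    print(np.abs(x - flt16).max())  # ~2.4e-4: worst just below 1.0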


Only on the human-readable level, which, when your brain is as big as mine is, doesn’t even like matter.


0 or 1 for discrete values when it comes to pattern recognition in vector similarity comparisons. Ask the guys that developed AutoClass at JPL to identify clusters of stars.


Continuous, but if digital systems are limited by bit count precision, analog systems are limited by SNR, distortions and "inertia".

How the brain deals with it is the interesting part (hence our "adversarial examples" are different from Artificial Neural Network ones).


Is there anything at all in the physical universe that's continuous?


Why does a paper on information processing in the brain not mention grid cells? http://www.scholarpedia.org/article/Grid_cells


No. (Betteridge's law of headlines)


Clever.



