For those interested beyond the new-age marketing speak that shouldn't exist at any real research company:
These are the guys behind DishBrain, which is not really a brain but a patch of neurons derived from human induced pluripotent stem cells, grown on the MaxOne silicon chip [2], which allows high-resolution recording from the entire ~5x5 mm chip.
One day we'll have biological networks of the size that can compete with today's biggest artificial neural networks, capable of running things like, well, ChatGPT. Then the philosophical questions will run even deeper....
Nobody would argue that turning off ChatGPT is killing a sentient being.
With a biological network, "turning it off" and killing it are very close, if not the same (depending on who you ask). Biological networks are present in our physical reality; you have actual matter to deal with, which also makes the experience completely different. People fall in love with and have compassion for ChatGPT; what do you think happens if you care for a brain in a vat for months? If it is an actual neural structure resembling natural ones, then it might be possible that it forms memories and becomes sentient. This is a completely different array of ethical questions: you can't think of (living) biological matter like a machine, especially not regarding ethics.
To be honest, I thought it would first go the other way around. Too many billionaires afraid of death, trying to encode their mindstate/personality into a machine and become semi-immortal.
And some billionaires aren't afraid enough of death. Steve Jobs could have outlived his pancreatic cancer if he had gotten it treated in time and hadn't indulged in fake cures.
Honestly, if you're an 80-year-old billionaire who has done 'everything', why not go for some mind-encoding shenanigans?
I've never understood this. Even if we had perfect mind-uploading capabilities, this doesn't help the billionaire who is afraid of death, right? It'd just be a copy of them. An immortal one, sure, but the original person would die just the same, no?
There's the philosophical argument that we have no continuity of consciousness in our meatbag bodies either - e.g. when you wake up, it's rebooting your consciousness from suspended memories.
There's physical continuity. You can be sure that most of your neurones will be the same tomorrow.
So to solve immortality, you gotta replace meat cells with silicon ones, slowly, one percent after another. It'll maintain relative continuity and hopefully transfer memories and other personal traits to the silicon, so that one day the brain will be immortal and repairable.
But once they pay $$$ and turn on the uploaded consciousness, won't they be like "Hey, why am I still here in my meat body and not in Amazon Brain Cloud(c)"?
I thought the fact that your mind is not magically booted into the simulation while you are still alive (during sleep, a blackout while drunk, or just pressing the power button on the server running it) should be a clue that you won't suddenly wake up in the simulation after you have died.
Our language can't fully represent the possibility, as it doesn't currently exist, and language is learned through mutually shared experiences upon which we then agree on terminology.
If you make a backup of my mind every midnight, and at noon one day biological-me faces death, that's still death for noon-me, while also being a way to cheat death from the point of view of the me from 12 hours before.
Restoring your computer from a backup doesn't mean the hard drive never failed, but it does get your data back.
No, our language is perfectly capable of expressing this simple fact: When you die you die, no matter how many clones or backups of you are up and running.
When you say "you", do you mean the continuity of consciousness (which is interrupted each sleep cycle), the personality and memories currently instantiated within your brain (which we don't actually know how to read yet never mind duplicate so the process of creating a backup at all is entirely hypothetical), or a soul?
When you say "die", do you mean clinical (cardiac) death, brain death, legal death, the cessation of internal cellular chemistry in more than n% of cells (which can itself take hours after legal death, but varies by tissue), or the irreversible destruction of the structures within your brain that keep the "you" previously defined existing at all (if you chose a non-soul based answer) or locked to the mundane plane (if you answered "soul")?
If any of your answers involves consciousness, Doerig et al[0] list 13 notable possibilities for what that word means, while Seth & Bayne[1] list 22.
Furthermore, consider the thought experiment of the Ship of Theseus, and ask yourself: if you make a sufficiently perfect copy of the ship, deliberately lose the record of which is the original, and destroy one of the two at random, can you see how our language does not allow us to say anything other than that the ship has both been destroyed and survived?
Your conscious thoughts might be asleep, but some part of you is still "there" and operating, or else you wouldn't be able to wake up and remember anything?
There would be no way to find out if that would be true however. The person in the vat might say they are the same person but you don't know if they really are. Also what happens if you create a copy? Is that two people or is there some kind of shared consciousness? If it is, how do they communicate?
Afaik the Ship of Theseus is about replacing parts incrementally until eventually no part is from the original ship. This is more like taking the ship, building an exact copy of it, and then throwing away the original. Not sure that thought experiment is applicable.
I think there's a good reason that trying to cheat death and meeting a grim end as a result is such a common trope in mythology. Even in ancient times I think people generally recognised the profound harm refusing to accept the inevitability of death does to a person.
We have had religious mythologies promising an eternal afterlife since ancient times too; I think it's more an irrational coping mechanism to deal with its inevitability. Everybody grows old and dies, but that doesn't make death less horrific; it's the worst aspect of the human condition. We just often pretend it's not, in various ways, to better deal with it. But that shouldn't prevent us from trying to cure it the same way we are trying to cure cancer.
Given the bizarre behavior seen around extremely old politicians, such as the recently deceased Feinstein (D), and the permanent unelected upper legislature of the US Supreme Court, I think the first uploaded forever politician is a bigger threat. But who will pay for their extended life?
(The word for "ageless billionaire" is "corporation")
Note that in the "uploaded politician" case it's not nearly so relevant as to whether the person themselves believes it's the same person, as to whether everyone else believes they're the same person, and whether the upload has legal continuity in their job and position.
Billionaires shouldn't exist; to continue the crab metaphor, they're already outside the pot and it's disingenuous to suggest that we're trying to pull others down by removing the flaws that allow such extreme accumulations of wealth - that's the kind of divisive talk they're all for.
People want to own things even if they are not billionaires, and they want to value things freely even if they are not billionaires. Those two things combined make billionaires unavoidable if you think about it.
I think what you really desire is that billionaires should not be able to corrupt society or exploit the environment to the detriment of others. Which I am fully behind and consider an attainable and worthy goal even though we're currently far from it.
>Those two things combined make billionaires unavoidable if you think about it.
What service, commodity, or necessity does one own that one pays a billion for? If production were worker-owned, there would be no necessity for billionaires. This is basically a statement on the concentration of wealth: theoretically no one should have 20 billion more "moneys" than any other person, with the linked fact that this person has 20 billion times the influence in politics and in getting their voice heard compared to a person with only one dollar on them. Because the fact that money buys influence is also "unavoidable" if you think about it. If I can feed 10,000 people daily and make them rely on me, they are much more likely to do my bidding and listen to me.
>What service, commodity, or necessity does one own that one pays a billion for?
Nobody has to actually pay a billion to make someone a billionaire.
If you and three of your friends create a website that someone wants to buy one percent of for 40 million dollars, you are all billionaires whether you want to sell or not.
It doesn't even matter if nobody else wants to buy the other 99% for the same price, in the eyes of the world you are a billionaire anyway.
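The back-of-the-envelope math behind this "paper billionaire" point, using the hypothetical numbers from the comment above (1% of a four-founder company sold for $40M):

```python
# Buying 1% for $40M implies a valuation for the whole company,
# whether or not anyone would pay that for the other 99%.
price_paid = 40_000_000   # dollars paid for the 1% stake
stake_sold = 0.01         # fraction of the company sold

implied_valuation = price_paid / stake_sold   # "paper" value of 100%
per_founder = implied_valuation * 0.99 / 4    # remaining 99%, split 4 ways

print(f"Implied valuation: ${implied_valuation:,.0f}")  # $4,000,000,000
print(f"Per founder, on paper: ${per_founder:,.0f}")    # $990,000,000
```

Strictly speaking, after the 1% sale each founder holds just shy of $1B; the point stands that the market's implied valuation, not any actual cash changing hands, is what makes them billionaires in the eyes of the world.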
Society doesn't create billionaires because they need to exist, they are a side-effect of other things that we desire to exist.
Any society that allows 1) ownership and 2) freedom will generate billionaires when it reaches a sufficient population.
It unfortunately sometimes also happens because 3) criminal activity, and we should of course do everything we can to prevent 3, but if we prevent 1 and 2 we've created a dystopia.
You know the difference between 40 million and a billion dollars?
A billion dollars.
We have a dystopia now with billionaires and their private space companies, buying newspapers, tracking our every move. We'd have LESS of a dystopia if we prevented them in the first place.
>We have a dystopia now with billionaires and their private space companies, buying newspapers, tracking our every move. We'd have LESS of a dystopia if we prevented them in the first place.
History teaches us the opposite. The worst dystopias are the ones where you have only one billionaire who also controls the military, and that is inevitably what happens when you try to limit the number of billionaires.
The best countries to live in tend to have a high number of billionaires per capita, which is natural since freedom and prosperity will generate billionaires. Tax havens twist this statistic of course but look at countries like Canada, Germany, Scandinavia, they are all up there and certainly no tax havens.
>History teaches us the opposite. The worst dystopias are the ones where you have only one billionaire who also controls the military
Explicitly not what I, or anyone else, is suggesting.
Or it could be that billionaires, being able to live anywhere, choose nice places to live - while not paying their fair share and contributing to the current situations we have now.
We don't want, or need, billionaires and we should stop that kind of ridiculous accumulation of wealth. Make stock buybacks illegal again, make the top marginal tax rate 70%, and make things work for workers (and not global capital).
>Explicitly not what I, or anyone else, is suggesting.
To be fair you hadn't suggested anything except "Billionaires shouldn't exist" yet, which is a sentiment that so far in history has only achieved dystopian results. How is your plan different?
Taxing and limiting the influence of billionaires and ensuring that workers are not exploited unfairly are fine suggestions. We can add prevention of monopolies and cartels to the list as well, but that's still a very different idea from "Billionaires should not exist".
I'm not carrying any water for anyone, you have just failed to make a persuasive argument for your position.
>Any society that allows 1) ownership and 2) freedom
No, this is highly dependent on the definition of "freedom". A huge market freedom, and freedom for money? Yes. Freedom from shackles, from one person being a billion times "better" than another, freedom for people to live lives somehow on the same plane? No. Billionaires are not a direct result of whatever you define as "freedom", which is a very murky concept; there can be freedom in societies without billionaires.
And yes, the whole system of buying stock and then valuing something at 1 billion is broken, but that does not change the fact; in fact it reinforces it, because clearly the system is broken.
>No, this is highly dependent on the definition of "freedom".
I'm talking about the freedom you (as in you personally, not the billionaires) have today to buy things you want. Let's say it's a book. Should you be allowed to buy a book for, say, 20 dollars?
If 100 million other people enjoy that same freedom to buy the same book, you have a billionaire author. How do you prevent that from happening? Honest question, I don't see any way to prevent it.
"Billionaires shouldn't exist" is thought number 1 that people get when they see what some of them are up to, and I certainly sympathize, but the only chain of reasoning I've seen that goes beyond thought 2 is the writings of Karl Marx. And where his thoughts end, thoughts from Stalin, Mao and the like always follow. They are billionaires too btw, just way worse than the ones we have.
I think the solution is strong laws and vigilant control to prevent corruption. If for example the punishment for corruption was confiscation of all your assets and it was actually enforced, I think we'd come a long way. It would almost certainly get rid of a lot of billionaires too, so maybe we have some common ground there after all. ;-)
"Neural networks" as in ChatGPT and friends have almost nothing to do with "neural networks" as understood in biology, beyond a passing analogy to how synapses fire. I'm very skeptical of projects which seek to conflate the two in some vague way.
Darkness, imprisoning me
All that I see, absolute horror
I cannot live, I cannot die
Trapped in myself, body my holding cell
Landmine, has taken my sight
Taken my speech, taken my hearing
Taken my arms, taken my legs
Taken my soul, left me with life in Hell
>What happens if we grow a mind native to the infinite possibility space of digital computing?
Nothing we need to care about, as there is no such thing. Mankind has put a significant portion of its limited attentional power into building interconnected silicon computers, at a scale so small compared to human bodies that the illusion of infinity is easy to fall into. At the same time, mankind went with a global policy of massively drawing down non-renewable energy stocks, destroying vast swaths of sustainable, life-supporting environment in the process. There is no such thing as unlimited resources or infinite space.
Now, obviously, this page is marketing idle talk, with a weak connection to the actual work in their labs.
From a purely scientific point of view, I wonder if that kind of device is just as vulnerable to magnetic storms as a pure silicon based device.
From a human perspective, without much more context, this just seems horrific, and I wish them many ethical and legal barriers to stop them already.
If the goal is to mimic the human brain, how is this different from just... starting with a human brain? Either way, you will be subjugating a possibly conscious mind into servitude. I am sure they are "happy" (however you want to measure it) about it either way lol
> you will be subjugating a possibly conscious mind into servitude.
Is the distinction between a protein-and-salt-water model versus an electronic-gates-and-memory model meaningful? From a computational standpoint, they both manipulate states and store data. Does substrate matter unless we're invoking metaphysical claims?
Regarding the video (https://twitter.com/Scobleizer/status/1716312250422796590), the device seems atypically polished but also a bit sus. Even if it contains living neurons, their functionality appears limited to mere survival rather than meaningful data processing. The claim that it is (or can become) "more efficient than a GPU" is premature at this point IMO.
> Is the distinction between a protein-and-salt-water model versus an electronic-gates-and-memory model meaningful?
I'd say the important part is "mind". That is what they claim they are setting out to create.
Our inability to answer questions like "Why do I exist? Why do I feel pain?" etc. conclusively is probably much less vexing than having a definite answer like "because someone made you to make profit off it and/or have an even bigger hammer to smash people and communities to bits with". Not to mention the following "changes made for the sake of change so someone can say they did a thing" that is already a blight on everything we make.
We know that the human brain is able to generate qualia (conscious experiences) despite having no model for how these are generated. (To be clear, by consciousness, I mean the ability to have conscious experiences such as experiencing a color or pain, not self-awareness.) On the other hand, the hypothesis that a Turing machine on its own could generate conscious experiences leads to many seemingly absurd scenarios. Notably, one has to ask how a simulation of a supposedly conscious Turing machine using pen and paper could possibly be conscious, or indeed, why one would need to "run" a Turing machine for consciousness to arise and why a mere description of it would not suffice. And how could the mere description of a Turing machine (or equivalently, some C code) be enough for all of its unlived life's consciousness to manifest? If that were the case, one would have to concede that the set of all possible conscious Turing machines is conscious and their experiences are manifested already. If that were the case, then it's hard to see any point in moral reasoning, so for the purpose of debating morals and ethics, I think we can rule this out.
Now, one might propose that consciousness only arises when a computational process is physically run in certain ways but not others (this is what proponents of Integrated Information Theory (IIT) typically believe). Assuming this is the case, then implementing a potentially conscious process with biological neurons presents much higher moral hazard vs an implementation of the same process electronically, or safer still, on a Von Neumann machine.
I would go even further however, and propose that a conscious being (e.g. a being capable of generating the qualia of the color red for instance) cannot be simulated, i.e. conscious processes are generally noncomputable. Why? Well, consider what Chalmers calls the meta-problem of consciousness, which is to say the problem of why we perceive there to be a (hard) problem of consciousness in the first place (and why we are having this very conversation). A simulation of a conscious being would by definition present the same behaviours as that being (given the same stimulus, but for simplicity, we can consider the stimulus as part of the simulation itself without loss of generality). Therefore a simulation of myself for instance, would generate the same thoughts about consciousness itself, and this very same text. But, if we accept the proposition of my first paragraph —that a Turing machine on its own cannot be conscious— this would imply that our whole thought process surrounding consciousness and indeed our very belief that we are conscious is purely coincidental. After all, the unconscious simulation of myself would claim and "believe" just as strongly as I do that it is conscious while that is not the case, which means that the process by which it derives these thoughts and conclusions would be wholly unrelated to the object of these thoughts (actual consciousness).
As such, it is my fairly strong belief that there is some physical "device" in our bodies which allows us to generate qualia and get feedback allowing us to store a record of these experiences. If I had to guess, I would say that this "device" is very likely located in our brains, and that it is quite likely spread throughout our neurons and possibly each one of them.
It should be noted that although I do not believe I can be simulated in my entirety for the above reasons, I do believe that I could likely be emulated with a high level of accuracy from the perspective of an outside observer. Actually we see this already with ChatGPT being able to play the role of a conscious being. But unconscious objects appearing conscious is nothing new in a sense since even a novel (especially told from a first person perspective) can be thought of as such an object already. It will be very interesting to see whether AIs trained without reference to the concepts of consciousness (a hard task to filter that out of the training data!) will ever present signs of consciousness. That would certainly put into question my above philosophical reflections.
---
So to summarise and answer your question more directly, I think there is something about our universe that allows for the generation of conscious experience and that our brains, likely on the neural level, have a bidirectional interaction with this something. As to what this is and how it works, I have no clue. Roger Penrose for instance put forth the idea that this might be related to quantum mechanics and certain molecular structures in our neurons capable of interacting with the quantum world in specific ways, but this is still pure speculation.
More importantly, we know from our own experience that interconnected biological neurons processing information and put under stress (rewards and penalties) are capable of generating conscious experience, including very negative ones. And, whereas I believe there is good reason to assume that Turing-equivalent processes such as electronic circuits are not capable of consciousness, I strongly believe that artificially created biological neural networks are very likely to be conscious, perhaps even at a fairly small scale already.
So, yes, I think any work creating artificial information processing systems using biological neurons needs to be very tightly regulated, if not stopped entirely. At the risk of sounding dramatic, we might accidentally create hell on Earth if we are not careful, at least if biological computing ever becomes competitive with transistor-based computing, which until now I'm glad has not looked to be the case... but I'm starting to worry.
> On the other hand, the hypothesis that a Turing machine on its own could generate conscious experiences leads to many seemingly absurd scenarios. Notably, one has to ask how a simulation of a supposedly conscious Turing machine using pen and paper could possibly be conscious, or indeed, why one would need to "run" a Turing machine for consciousness to arise and why a mere description of it would not suffice.
Applying this logic, heat is also fundamentally mysterious. What even is heat? Heat definitely exists, but is a mere description of it enough? If I run a simulation of a universe with heat, is that heat?
I'm not sure what you're getting at. There's a fundamental difference between such physical concepts, which I can ultimately describe mathematically (at various scales and levels of accuracy), and conscious experiences.
For instance, I can sensibly ask "what's it like to be a cat?". But "what's it like to be a rock?", or "to be a hot rock?", or "a cold rock?" doesn't make much sense since there's presumably nothing that it's like to be a rock regardless of its temperature.
> I'm not sure what you're getting at. There's a fundamental difference between such physical concepts, which I can ultimately describe mathematically (at various scales and levels of accuracy), and conscious experiences.
You can describe heat mathematically the same way you can describe the interactions of every atom in a brain mathematically, but neither yields/explains why it is the way it is. It just is.
> For instance, I can sensibly ask "what's it like to be a cat?". But "what's it like to be a rock?", or "to be a hot rock?", or "a cold rock?" doesn't make much sense since there's presumably nothing that it's like to be a rock regardless of its temperature.
I don't understand this analogy. What you are doing is ultimately putting a mirror on yourself. You and I have no idea what it is like to be each other. In fact, I would argue you don't even know "what it's like to be yourself from 1 day ago". You will ultimately just be reflecting your own current experience onto your supposedly previous self.
So, "what it's like to be a rock": I don't know. Consciousness is just that mysterious. Suppose you lay out an array of iterations of my body, where i=0 is my whole body, i=1 is my body minus 1 atom, and so on up to all N of my atoms. Then at what index does consciousness stop and start? To say that a rock and an atom do not have consciousness (however different/minuscule in experience they are) is to put a hard wall at some index K in this array. I just don't think that's true.
>You can describe heat mathematically the same way you can describe the interactions of every atom in a brain mathematically but neither yields/explains why it is the way it is.
My whole argument was that I don't think one can describe the interactions of every atom in the brain in a computable form. But actually, I would go further and say they likely can't even be described mathematically.
If this sounds crazy, consider that most mathematical objects are not describable (i.e. can't be singled out). For instance, most real numbers cannot even be imagined, and this stems from the fact that we can only describe things in a finite number of symbols, i.e. in bijection with the set of natural numbers, which is (infinitely) smaller than the set of real numbers.
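The counting argument here can be written out explicitly; a standard sketch (the notation is mine, not from the thread):

```latex
% Every description is a finite string over a finite alphabet $\Sigma$,
% so the set of all possible descriptions is countable:
\[
  |\Sigma^*| = \Bigl|\,\textstyle\bigcup_{n \ge 0} \Sigma^n \Bigr| = \aleph_0 .
\]
% By Cantor's diagonal argument, the reals are strictly larger:
\[
  |\mathbb{R}| = 2^{\aleph_0} > \aleph_0 ,
\]
% so all but countably many real numbers admit no finite description:
% they can never be "singled out", only quantified over.
```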
>I don't understand this analogy.
This wasn't an analogy but an example to show the fundamental distinction between nonconscious things, which can be dissected, described mathematically, and simulated (although it might be possible for something to be nonconscious and noncomputable at the same time, we have no reason to believe that such things exist), and conscious things, which, at least in some very small part of them, cannot.
As for a rock being conscious or not, I choose to assume it's not for simplicity and because that seems sensible, but I'm not totally against panpsychism in principle.
> So to summarise and answer your question more directly, I think there is something about our universe that allows for the generation of conscious experience and that our brains, likely on the neural level, have a bidirectional interaction with this something.
What makes you think that this relationship is bidirectional? As far as I know and have experienced, the relationship is entirely unidirectional. I literally have no idea what I am going to do next. Knowing would require me to think about what to think before thinking it.
brain-to-???: certain flows of information within our brains (somehow) trigger the creation of conscious experience
???-to-brain: our brains keep records of conscious experiences and these records are why we're able to have such conversations as these
where ??? = Consciousness, the Universe, God, your spirit, your soul, or however you want to conceptualise it (I don't know or claim to know, although I would tend to argue that your spirit/soul isn't a thing (and neither mine of course), but that's another story)
You might naturally question how we could possibly store records of conscious experiences in our brains if these are indeed not things of the realm of computation. I think we can make an analogy with a camera here. A camera captures photons, but ultimately it does not store the photons, only some numbers (pixels, bits) which represent how to restore the original photons (although with a lot of loss). Now when these bits are fed into an appropriate device, e.g. a screen, some photons vaguely resembling the originals can be reproduced.
I see the brain as similar, in that qualia cause excitations in our brain which are recorded in our memory and can be replayed in some dulled-down form later, at least sometimes. But our brain doesn't store actual qualia, just records of them, or perhaps just of having experienced them. If this were not the case, we would not be able to have this conversation.
To avoid further confusion, it might be worth pointing out that our brain might well store records of qualia and of physical/informational stimuli simultaneously, or sometimes perhaps just of one of these. So I'm not saying our brain reconstructs images from records of conscious experiences of those images for instance (but maybe, I don't know).
>You have zero control over what you will be sending to the brain.
I agree. I never said that ??? is "me" here.
To be honest, I've come to the daunting conclusion that the metaphysical self (as opposed to the psychological/physical self) is an illusion, resulting from our brains' memories. Consciousness wise, the me of tomorrow, or one hour from now, or one hour ago, etc., is just as distant from me now as you are from me now, or as a dinosaur millions of years ago is. And when you or anything else suffers greatly, this is just as much a concern as if I were to be told I will experience the same suffering in the future. Although my primate brain (thankfully) does not allow me to experience quite as much angst over others' suffering as my future self's.
As to how I came to this conclusion: just as most of us here would agree that we do not have eternal souls, since elements of our personality, memory, and so on are by all indications stored inside our perishable brains, by the same line of reasoning, and applying Occam's Razor again, we should not have a separate "spirit" (even of the barest form), since our impression of our lives being individual and continuous is the result of the same brain structures. Shedding the idea of the "spirit" also has the nice benefit of solving the various paradoxes of Star Trek teleporters, atomic-scale cloning of people, and so on.
There is also no reason to presume that our time exists at the level of consciousness, and the fact that general relativity precludes the existence of a canonical time ordering for the universe (Block Universe) also points me in that direction.
> ???-to-brain: our brains keep records of conscious experiences and these records are why we're able to have such conversations as these
There's no such thing as ???-to-brain. You have zero control over what you will be sending to the brain. And whatever it is that is sending that message to the brain, that is not "you". It is just another phenomenon of the universe. In fact, "you" don't have any control over what you are going to do next. When I "command" my brain to lift my hand up, I ultimately have no idea where that command is coming from.
It is more like "brain <-> ??? <-> outside world" or "reality <-> ??? <-> reality", because we are just a witness to whatever reality is doing.
> From a computational standpoint, they both manipulate states and store data.
Ignoring the metaphysical question: it really frustrates me when people treat the brain as just a more complex version of a computer. The human brain and a silicon-based computer are two vastly different things; you can't and shouldn't compare them.
The brain doesn't run software; it's an organ inside the body.
If you mean "can we simulate a brain inside a computer", the answer is no, we can't. It's unlikely we can without building something similar to the brain, which is why this product is a thing.
The brain very much does run a type of software. Yes, it's an organ, but the analogy to hardware and software is apt. That's like saying a BIOS doesn't run software; it's a chip on a motherboard.
To get this kind of accuracy and bandwidth in signal measurements from a human, you'd have to risk killing them, or at least severely crippling them. Look at how outraged people here regularly get over Neuralink's tiny chip, which only inserts an inch or so of electrode threads into the brain. Now imagine they were sticking stuff in there that penetrates the entire brain. Publicity-wise, it's much easier to just do that with a bunch of neurons on a board.
Is the outrage not justified? The chips might be tiny but Neuralink has an abysmal record with animal welfare, and they do, regularly, severely cripple and kill animals.
Of note: This is an active test of the "Free Energy Principle" [1] with seemingly supportive results.
Their definition, in the paper, of sentience is as follows:
> It is proposed that these neural cultures would meet the formal definition of sentience as being “responsive to sensory impressions” through adaptive internal processes.
In the Rifters trilogy by Peter Watts this technology is referred to as “head cheese”. It’s used as a form of AI as it turns out silicon can never beat biology at analog reasoning. It ends up playing a role in the plot.
IRL If the masses of neurons start getting large this starts to be ethically questionable… in direct proportion to how large.
> Under this theory, BNNs [biological neural networks] hold “beliefs” about the state of the world, where learning involves updating these beliefs to minimize their VFE [variational free energy] or actively change the world to make it less. If true, this implies that it should be possible to shape BNN behavior by simply presenting unpredictable feedback following “incorrect” behavior.
Incorrect behavior is punished by pushing static, simulating an "unpredictable" environment, so that the BNN (biological neural network) acts differently. I really hope that if they continue this line of development, they find something other than "chaos overrides" to sensory input, because at a certain level of intelligence that would lead to insanity.
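As a rough illustration of the feedback rule being described (a sketch with hypothetical names; the real system delivers patterned electrical stimulation to the culture, not Python lists), the contrast between predictable reward and unpredictable "static" can be written as:

```python
import random

def feedback(action_correct, width=8, rng=random.Random(0)):
    # Sketch of the feedback scheme: "correct" behaviour earns a
    # structured, predictable stimulus, while "incorrect" behaviour is
    # answered with unpredictable noise ("static") -- which the
    # free-energy view predicts the network will act to avoid.
    if action_correct:
        return [1.0] * width                      # predictable pattern
    return [rng.random() for _ in range(width)]   # chaotic stimulus

print(feedback(True))   # the same structured pattern every time
print(feedback(False))  # different noise on every call
```

The mutable default `rng` is deliberate here: it persists across calls, so each "punishment" stimulus differs from the last.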
It's clear that you can get actual neurons in a dish to behave as an artificial neural network, and it's a fascinating research question. What's not clear is the business value here: do they expect these natural networks to outperform ANNs in any foreseeable application?
Yeah, power-performance is a big one. The entire human brain runs at about 12 watts. Human neurons are several orders of magnitude more efficient for neural computing than anything that's been dreamt up in silicon. That might not always be the case, but I imagine there are still another several orders of magnitude at least that artificial neurons might fill as this technology evolves.
Imagine something like a Raspberry Pi with a neural network mounted on it that is significantly more powerful than an H100.
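For a back-of-envelope sense of the gap being claimed, here is the arithmetic with rough public figures (~8.6e10 neurons and ~20 W for the whole brain, ~700 W TDP for an H100; none of these numbers come from the article, and the comment above uses 12 W):

```python
BRAIN_NEURONS = 86e9   # rough estimate of neurons in a human brain
BRAIN_WATTS = 20.0     # commonly quoted whole-brain power budget
H100_WATTS = 700.0     # H100 SXM TDP

watts_per_neuron = BRAIN_WATTS / BRAIN_NEURONS
neuron_equivalent = H100_WATTS / watts_per_neuron

print(f"{watts_per_neuron:.1e} W per biological neuron")            # ~2.3e-10 W
print(f"one H100's power budget = ~{neuron_equivalent:.1e} neurons")  # ~3e12
```

In other words, under these assumptions a single H100's power budget corresponds to dozens of brains' worth of neurons, which is the "several orders of magnitude" in the comment above.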
The brain might need only 12W, but I guarantee you that their setup maintaining 10 orders of magnitude fewer neurons in a dish is already using far more than that.
I get that brains are far more efficient than computers. What I'm wondering is how a business case can be made today for even considering an approach like this.
We don't understand how human neurons in particular are so good at what they do. We have some understanding, which has been mathematically approximated with back-propagation algorithms and transformers. That's why we have diffusion nets and LLMs that are as good as they are.
But as for the nuts and bolts of how neural systems give rise to reasoning, or even what consciousness is... we only know that neural systems are responsible for reasoning, and we don't even know definitively that neural systems cause consciousness. We only know that if you suppress their activity, consciousness is impaired, and if you damage them enough, consciousness is snuffed out.
Beautiful website. They massively over-claim on their science though. Their 'tech' is nothing special, people have been culturing and recording from neurons for years. And their claims of 'sentient' neurons in a dish should be taken as being somewhat flexible with the meaning of the word.
The website reminds me of mid 90s multimedia presentations on CDs. It takes a long time to scroll and read and there is not much to learn about them. Maybe their research page would be a better starting point https://corticallabs.com/research.html
The latest article is "Critical dynamics arise during structured information presentation within embodied in vitro neuronal networks"
When I see something so over-engineered on the aesthetics front, my knee-jerk reaction is to think "this is overcompensating, and I guess they don't really have what they promise".
> Our biOS composes their reality, sending information about it via electrical signals. It then converts the neuron's activity into actions inside that reality. Their world is mediated through our biOS.
The simulation argument asserts that "at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation."
(1) is still possible - humankind is armed to the teeth, this tech could be a bunch of hot air, the breakthrough is still N years away, etc.
(2) becomes less likely with every headline. If we had the ability today to simulate a consciousness-filled universe, thousands of instances would be spun up overnight.
I know that smarter people than me have discussed this to death, but IMO, you simply can't make statistical arguments like that.
Let's say you have an input x and a step function f such that the sequence x, f(x), f(f(x)), ... contains (in whatever sense) a conscious mind. Once that sequence is defined, it doesn't (and can't) logically matter whether it is evaluated once, twice or a hundred times, whether it is evaluated on a slow computer or fast computer, or, in fact, not evaluated at all. There is always only one sequence, and the act of evaluating adds no information to it.
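The determinism being argued for can be made concrete with a toy step function (everything below is illustrative; `f` is just a pure 64-bit function standing in for "one tick of a mind"):

```python
def f(state):
    # deterministic toy step function (a 64-bit LCG); any pure function works
    return (state * 6364136223846793005 + 1442695040888963407) % 2**64

def trajectory(x, n):
    # the sequence x, f(x), f(f(x)), ... is fixed by x and f alone
    seq = [x]
    for _ in range(n):
        seq.append(f(seq[-1]))
    return seq

# Evaluating it once, twice, or on a slower machine yields the identical
# sequence; the act of evaluation adds no information to it.
assert trajectory(42, 10) == trajectory(42, 10)
```

That identity is the crux of the comment: whatever is "in" the sequence is there whether or not anyone runs the computation.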
Or, actually, with Ctrl+U (the code is clean, just go to the <main>)
Well, for convenience:
==== ==== ==== ====
###~ What does it mean to grow a mind? ~###
The human mind is the north star for digital intelligence. But silicon can only do so much. Cortical is growing human neurons into silicon. Their reality is our simulation. We think these minds will learn better than any digital model and breathe life into our machines.
###~ Human neural networks raised in a simulation ~###
The neurons exist inside our Biological Intelligence Operating System (biOS). biOS runs the simulation and sends information about their environment, with positive or negative feedback. It interfaces with the neurons directly. As they react, their impulses affect their digital world.
###~ Our first minds ~###
The dishbrain is currently being developed at the CL0 laboratory in Melbourne, AU. We bring these neurons to life, and integrate them into The biOS with a mixture of hard silicon and soft tissue. Our first cohort have learnt to play Pong. They grow, adapt and learn as we do.
###~ Silicon meets neuron ~###
Neurons are cultivated inside a nutrient rich solution, supplying them everything they need to be happy and healthy. Their physical growth is across a silicon chip, which has a set of pins that send electrical impulses into the neural structure, and receive impulses back in return.
###~ A direct connection to infinity ~###
This creates the highest bandwidth connection possible between an organic neural network and a digital world. Our biOS composes their reality, sending information about it via electrical signals. It then converts the neuron's activity into actions inside that reality. Their world is mediated through our biOS.
###~ The Ultimate Learning Machine ~###
Those actions have a positive or negative effect in biOS, which the mind perceives, adapting to improve that feedback. The human neuron is self programming, infinitely flexible, the result of four billion years of evolution. What digital models try and emulate, we begin with.
###~ Why? ~###
There are many advantages to organic-digital intelligence. Lower power costs, more intuition, insight and creativity in our intelligences. But most importantly we are driven by three core questions.
###~ What will we discover if our intelligences train themselves? ~###
We know an organic mind is a better learner than any digital model. It can switch tasks easily, and bring learnings from one task to another. But more important is what we don’t know. What are the limits of a mind connected to infinity? What can it do with data it literally lives in?
###~ What happens if we take a shortcut to generalised intelligence? ~###
Machine Learning algorithms are a poor copy of the way an organic neural network functions. So we’re starting with the neuron, replacing decades of algorithms with millions of years of evolution. What happens as these native intelligences start solving the problems we’d previously left to software?
###~ How can we surpass the limits of silicon? ~###
Silicon is raw, rigid, unchanging. Our organic neural networks sit on top of this raw power, but the way they grow and evolve isn’t limited to the software they run on. There is no software, it's coded in their DNA. How will computing change as we shift from hard silicon to soft tissue?
###~ RFN: Request For Neurons ~###
The dishbrain is learning and growing in biOS today, and soon we’re opening an early access preview for selected developers. The biOS is our simulation environment, where you can program tasks, challenges and objectives for our minds. Join our developer program to get early access to our SDK, and secure training time with our minds.
###~ What comes next ~###
We’re not making smarter computers, more efficient data centers, or more personalised advertising. We’re doing this to see what happens. What happens if we grow a mind native to the infinite possibility space of digital computing?
We wonder what it will mean for digital spaces, for robotics, science, personal care. To explore the delineation between the personal mind, the distributed mind, digital and physical realities. To blur those boundaries. We wonder what it means to grow a mind, born of the physical world, but a native of the digital world, where that mind will go, and what it will teach us.
Wonder with us.
I've seen what happens when you let the researchers build the websites, and it's not great.
On the other hand, maybe the internet would be a whole lot better if everything was just a giant wiki, and we let our personal browser bots go scrape for the interesting stuff and report back to us.
These are the guys behind Dishbrain, which is not really a brain but a patch of human induced pluripotent stem cells grown on the Maxone silicon chip [2] which allows recording from the entire ~5x5mm chip with high resolution.
[1] https://newatlas.com/computers/human-brain-chip-ai/
[2] https://www.mxwbio.com/products/maxone-mea-system-microelect...