A really simple approach we took while I was on a research team at Microsoft trying to predict when AGI would land was to estimate at what point we could run a full simulation of all of the chemical processes and synapses inside a human brain.
The approach was tremendously simple and totally naive, but it was still interesting. At the time a supercomputer could simulate the full brain of a flatworm. We then applied a Moore's-law-esque assumption that simulation capacity doubles every 1.5-2 years (I forget the exact period we used) and mapped out which animals we would have the capability to simulate by each date. We marked years for a field mouse, a corvid, a chimp, and eventually a human brain. The date we landed on was 2047.
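For anyone curious what that extrapolation looks like mechanically, here is a minimal sketch. Everything in it is an illustrative assumption on my part: the baseline year, the doubling period, the neuron counts, and the use of neuron count as the proxy for "simulation capacity" (the team may well have scaled by synapses or something finer).

    # Back-of-the-envelope extrapolation of "which brain can we simulate when".
    # All constants below are illustrative assumptions, not the team's actual inputs.
    import math

    BASELINE_YEAR = 2010        # assumed year a supercomputer could simulate a flatworm
    BASELINE_NEURONS = 302      # C. elegans neuron count
    DOUBLING_YEARS = 1.75       # assumed midpoint of the 1.5-2 year doubling range

    targets = {
        "field mouse": 7e7,     # approximate neuron counts
        "corvid": 1.2e9,
        "chimp": 2.8e10,
        "human": 8.6e10,
    }

    for animal, neurons in targets.items():
        doublings = math.log2(neurons / BASELINE_NEURONS)
        print(f"{animal}: ~{BASELINE_YEAR + doublings * DOUBLING_YEARS:.0f}")

With these made-up inputs the human date lands in the late 2050s rather than 2047, which just underlines how much the answer depends on the inputs you pick.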
There are so many things wrong with that approach I can't even count, but I'd be kinda smitten if it ended up being correct.
To be pedantic, I would argue that we aren't even close to being able to simulate the full brain of a flatworm on a supercomputer at anything deeper than a simple representation of neurons.
We can't even simulate all of the chemical processes inside a single cell. We don't even know all of the chemical processes. We don't know the function of most proteins.
It depends on what kind of simulation you're trying to run, though. You don't need to perfectly model the physically moving heads and magnetic oscillations of a hard drive to emulate an old PC; it may be enough to just store the bytes.
I suspect if you just want an automaton that provides the utility of a human brain, we'll be fine just using statistical approximations based on what we see biological neurons doing. The utility of LLMs so far has certainly moved the needle in that direction, although there's enough we don't know about cognition that we could still hit a surprise brick wall when we start trying to build GPT-6 or whatever. But even so, a prediction of 2047 for that kind of AGI is plausible (ironically, any semblance of Moore's Law probably won't last until then).
On the other hand, if you want to model a particular human brain... well, then things get extremely hairy scientifically, philosophically, and ethically.
We have almost no idea what biological neurons are doing, or why. At least we didn't when I got my PhD in neuroscience a little over 10 years ago. Maybe it's a solved problem by now.
I'm referring to the various times biological neurons have been (and will likely continue to be) the inspiration for artificial neurons[0]. I acknowledge that the word "inspiration" is doing a lot of work here, but the research continues[1][2]. If you have a PhD in neuroscience, I understand your need to push back on the hand-wavy optimism of the technologists, but I think saying "almost no idea" is going a little far. Neuroscientists are not looking up from their microscopes and fMRIs, throwing up their hands, and giving up. Yes, there is a lot of work left to do, but it seems needlessly pessimistic to say we have made almost no progress either in understanding biological neurons or in moving forward with their distantly related artificial counterparts.
Just off the top of my head, in my lifetime, I have seen discoveries regarding new neuropeptides/neurotransmitters such as orexin, starting to understand glial cells, new treatments for brain diseases such as epilepsy, new insight into neural metabolism, and better mapping of human neuroanatomy. I might only be a layman observing, but I have a hard time believing anyone can think we've made almost no progress.
The field has made a big step forward: imaging is more powerful now, and some people are starting to grow organoids made of neurons. There is a lot left to learn, but as soon as we can get good data, I'd guess AI will step in and digest it.
The vast majority of the chemical processes in a single cell are concerned with maintaining homeostasis for that cell - just keeping it alive, well fed with ATP, and repairing the cell membrane. We don't need to simulate them.
If you have any evidence to the contrary, I would love to hear it because it would upend biology and modern medicine as we know it and we'd both win a Nobel prize.
As long as it's modern scientific evidence and not a 2,300 year old anecdote, of course.
The role of astrocytes in neural computation is an example. For a long time, the assumption was that astrocytes were just "maintenance" or structural cells (the name "glia" comes from "glue"). Thus, they were not included in computational models. More recently, there is growing recognition that they play an important role in neural computation, e.g. https://picower.mit.edu/discoveries/key-roles-astrocytes
> Neurons do not work alone. Instead, they depend heavily on non-neuronal or “glia” cells for many important services including access to nutrition and oxygen, waste clearance, and regulation of the ions such as calcium that help them build up or disperse electric charge.
That's exactly what homeostasis is, but we don't simulate astrocyte mitochondria to understand what effect they have on another neuron's activation. They are independent. Otherwise, biochemistry wouldn't function at all.
> they showed in live, behaving animals that they could enhance the response of visual cortex neurons to visual stimulation by directly controlling the activity of astrocytes.
Perhaps we're talking past each other, but I thought you were implying that since some function supports homeostasis, we can assume it doesn't matter to a larger computation, and don't need to model it. That's not true with astrocytes, and I wouldn't be surprised if we eventually find out that other biological functions (like "junk DNA") fall into that category as well.
> Perhaps we're talking past each other, but I thought you were implying that since some function supports homeostasis, we can assume it doesn't matter to a larger computation, and don't need to model it. That's not true with astrocytes, and I wouldn't be surprised if we eventually find out that other biological functions (like "junk DNA") fall into that category as well.
I was only referring to the internal processes of a cell. We don't need to simulate 90+% of the biochemical processes in a neuron to get an accurate simulation of that neuron - if we did it'd pretty much fuck up our understanding of every other cell because most cells share the same metabolic machinery.
The characteristics of the larger network and which cells are involved is an open question in neuroscience and it's largely an intractable problem as of this time.
I very likely was incorrect about the chemical processes, so thank you for clarifying. This is me remembering work from a decade ago, so I'm almost certainly wrong about some of the details.
> We can't even simulate all of the chemical processes inside a single cell. We don't even know all of the chemical processes. We don't know the function of most proteins.
While I believe there are some biological processes that rely on quantum effects such as entanglement, they haven't been found in the brain. So the right level of abstraction is likely somewhere just above the molecule level (chemical gradients and diffusion timings in cells certainly have an effect).
Who knows! I'm sure it depends on how accurately you want to simulate a flatworm brain.
I think current AI research has shown that simply representing a brain as a neural network (e.g. fully connected, simple neurons) is not sufficient for AGI.
Estimates of GPT-4 parameter counts are ~1.7 trillion, which is approximately 20-fold greater than the ~85 billion human neurons we have. To me this suggests that naively building a 1:1 (or even 20:1) representation of simple neurons is insufficient for AGI.
The human brain is the only thing we can conclusively say does run a general intelligence, so it's the level of complexity at which we can confidently say it's just a software/architecture problem.
There may be (almost certainly is) a more optimized way a general intelligence could be implemented, but we can't confidently say what that requires.
If something else could replace humanity at intellectual tasks we would say it is generally intelligent as well. Currently there is no such thing, we still need humanity to perform intellectual tasks.
The definition of an 'intellectual task' used to mean 'abstract from experience' (Aristotle) or 'do symbolic processing' (Leibniz). Computers can now do these things - they can integrate better than Feynman, distinguish 'cat' vs 'dog' pictures by looking at examples, and pass the MCAT and LSAT better than most students, not to mention do billions of calculations in one second. And we have moved the goalpost accordingly.
Another approach: the adult human brain has 100 (+/- 20) billion or 10^11 neurons. Each neuron has ~10^3 synapses and each synapse has ~10^2 ion channels, which amounts to 10^16 total channels. Assuming 10 parameters is enough to represent each channel (unlikely), that's about 10^17 (100 quadrillion) total parameters. Compare that to GPT-4, which is rumored to be about 1.7*10^12 parameters on 8x 80GB A100s.
log(10^17/10^12)/log(2) = 16.61, so assuming 1.5 years per doubling, that'll be another 24.9 years - December 2048 - before 8x X100s can simulate the human brain.
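A quick sanity check of that arithmetic (a sketch; the per-channel parameter budget and the GPT-4 size are the assumptions stated above, not established figures):

    # Reproduce the estimate above: brain "parameters" vs. GPT-4 size.
    import math

    neurons = 1e11                  # ~100 billion neurons
    synapses_per_neuron = 1e3
    channels_per_synapse = 1e2
    params_per_channel = 10         # assumed, per the comment ("unlikely" to suffice)

    brain_params = neurons * synapses_per_neuron * channels_per_synapse * params_per_channel  # 1e17
    gpt4_params = 1e12              # order of magnitude used above (rumored ~1.7e12)

    doublings = math.log2(brain_params / gpt4_params)   # ~16.6
    print(f"{doublings:.2f} doublings -> ~{doublings * 1.5:.1f} years at 1.5 years/doubling")

Using the full 1.7e12 instead of the rounded 1e12 shaves off roughly a year (about 15.8 doublings, ~23.8 years).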
I assume simulation capacity takes into account the data bandwidth of the processing systems. It seems we are always an order of magnitude or two behind in bytes/words per second to feed simulations compared to raw flops. When you consider there are multiple orders of magnitude more synapses between neurons than neurons (not to mention other cell types we are only beginning to understand) -- bandwidth limitations seem to put estimates about 10-15 years past computation estimates. By my napkin math, accounting for bandwidth limitations, we will get single-human-intelligence hardware capabilities around 2053-2063. Whether or not we've figured out the algorithms by then is anyone's guess. Maybe algorithm advances will reduce hardware needs, but I doubt it, because computational complexity to solve hard problems is often a matter of getting all the bits to the processor to perform all the comparisons necessary. However, the massive parallelism of the brain is a point of optimism.
Your approach will eventually work, no doubt about it, but the question is whether the amount of energy the computer uses to complete a task is less than the energy the equivalent conglomeration of humans use to complete a task.
It seems clear at this point that although computers can be made to model physical systems to a great degree, this is not the area where they naturally excel. Think of modeling the temperature of a room: you could try to recreate a physically accurate simulation of every particle and its velocity. We could then create better software to model the particles on ever more powerful and specific hardware to model bigger and bigger rooms.
Just like how thermodynamics might make more sense to model statistically, I think intelligence is not best modeled at the synapse layer.
I think the much more interesting question is what would the equivalent of a worm brain be for a digital intelligence?
Is there something to read about simulating a worm brain? Neurons aren't just simply on and off; they grow and adapt physically along with their chemical signals. Curious how a computer accounts for all of that.
Just going to follow up and say “I don’t think that statement is even remotely true now, much less back then”. We haven’t accurately simulated any life forms. Failure to simulate C. elegans is notable.
I got this survey; for the record I didn't respond.
I don't think their results are meaningful at all.
Asking random AI researchers about automating a field they have no idea about means nothing. What do I know about the job of a surgeon? My opinion on how current models can automate a job I don't understand is worthless.
Asking random AI researchers about automation outside of their area of expertise is also worthless. A computer vision expert has no idea what the state of the art in grasping is. So what does their opinion on installing wiring in a house count for? Nothing.
Even abstract tasks like translation. If you aren't an NLP researcher who has dealt with translation you have no idea how you even measure how good a translated document is, so why are you being asked when translation will be "fluent"? You're asking a clueless person a question they literally cannot even understand.
This is a survey of AI hype, not any indication of what the future holds.
Their results are also highly biased. Most senior researchers aren't going to waste their time filling this out (90% of people did not fill it out). They almost certainly got very junior people and those with an axe to grind. Many of the respondents also have a conflict of interest, they run AI startups. Of course they want as much hype as possible.
This is not a survey of what the average AI researcher thinks.
Thank you for this comment. It is great to hear an inside take.
Idle curiosity, but what NLP tools evaluate translation quality better than a person? I was under the (perhaps mistaken) impression that NLP tools would be designed to approximate human intuition on this.
> Their results are also highly biased. Most senior researchers aren't going to waste their time filling this out (90% of people did not fill it out). They almost certainly got very junior people and those with an axe to grind. Many of the respondents also have a conflict of interest, they run AI startups.
The survey does address the points above a bit. Per Section 5.2.2 and Appendix D, the survey had a response rate of 15% overall and of ~10% among people with over 1000 citations. Respondents who had given "when HLMI [more or less AGI] will be developed" or "impacts of smarter-than-human machines" a "great deal" of thought prior to the survey were 7.6% and 10.3%, respectively. Appendix D indicates that they saw no large differences between industry and academic respondents besides response rate, which was much lower for people in industry.
> Idle curiosity, but what NLP tools evaluate translation quality better than a person? I was under the (perhaps mistaken) impression that NLP tools would be designed to approximate human intuition on this.
This is a long story. But your take on this question is what the average person who responded to that survey knows, and it shows you really how little the results mean. Here are some minutiae that really matter:
1. Even if you measure quality with people in the loop, what do you ask people? Here's a passage in English, one in French, do you agree? Rate it out of 10? Turns out people aren't calibrated at all to give reasonable ratings; you get basically junk results if you run this experiment.
2. You can ask people to do head to head experiments. Do you like translation A more than translation B? But.. what criteria should they use? Is accuracy what matters most? Is it how would they translate? Is it how well A or B reads? Is it how well it represents the form of the source? Or the ideas of the source?
3. Are we measuring sentences? Paragraphs? Pages? 3-word sentences like "give me gruel" are pretty easy. 3-page translations get tricky. Now you want to represent something about the style of the writer. Or to realize that they're holding something back. For example, it can be really obvious in French that I'm holding back someone's gender, but not obvious at all in English. What about customs? Taboos? Do we even measure 3 pages' worth of translation in our NLP corpora? The respondents have no idea.
4. There are even domain-specific questions about translations. Do you know how to evaluate English to French in the context of a contract? One that goes from common law to civil law? No way. You need to translate ideas now, not just words. How about medical translation? Most translation work is highly technical like this.
I could go on. Mostly we don't even measure minutiae about translations or domain-specific translation in our NLP benchmarks because the tools aren't good enough for that. Nor do we measure 5-page translations for their fidelity.
We actually mostly don't measure translations using humans at all! We collect translations from humans and then we compare machine translations to human translations after the fact, with something called parallel corpora (the historical example is the Hansard corpus; which is the proceedings of the Canadian parliament that are manually translated in English and French; the EU has also been a boon for translation research).
I'm scratching the surface here. Translation is a really complicated topic. My favourite book related to this is the Dictionary of Untranslatables https://press.princeton.edu/books/hardcover/9780691138701/di... Not something you'd read end-to-end but a really fun reference to dip into once in a while.
If someone who knows about these issues wants to say that there will be human-level translation AI in 10 years, ok, fine, I'm willing to buy that. But if someone who is ignorant of all of this is trying to tell me that there will be human-level AI for translation in 10 years, eh, they just don't know what they're talking about. I am, by the way, only a visitor to translation research; I've published in the area, but I'm not an expert at all, and I don't even trust my own opinion on when it will be automated.
About biases, I saw appendix A and D.
Seniority doesn't mean >1000 citations. There are master's students with 1000 citations in junk journals who happened to get a paper in a better venue. Number of citations is not an indication of anything.
The way they count academia vs industry is meaningless. There are plenty of people who have an affiliation to a university but are primarily at a startup. There are plenty of people who are minor coauthors on a paper, or even faculty who are mostly interested in making money off of the AI hype. There are plenty of people who graduated 3 years ago, this is a wrap-up of their work, they counted as academic in the survey, but now they're in industry. etc.
Maybe I'm too pessimistic, but I doubt we will have AGI by even 2100. I define AGI as the ability of a non-human intelligence to do anything any human has ever done or will do with technology that does not include the AGI itself.*
* It also goes without saying that by this definition I mean to say that humanity will no longer be able to meaningfully help in any qualitative way with respect to intellectual tasks (e.g. AGI > human; AGI > human + computer; AGI > human + internet; AGI > human + LLM).
Fundamentally I believe AGI will never happen without a body. I believe intelligence requires constraints and the ultimate constraint is life. Some omniscient immortal thing seems neat, but I doubt it'll be as smart since it lacks any constraints to drive it to growth.
> I define AGI as the ability of a non-human intelligence to do anything any human has ever done or will do with technology that does not include the AGI itself.*
That bar is insane. By that logic, humans aren't intelligent.
What do you mean? By that same logic humans definitionally already have done everything they can or will do with technology.
I believe AGI must be definitionally superior. Anything else and you could argue it’s existed for a while, e.g. computers have been superior at adding numbers basically their entire existence. Even with reasoning, computers have been better for a while. Language models have allowed for that reasoning to be specified in English, but you could’ve easily written a formally verified program in the 90s that exhibits better reasoning in the form of correctness for discrete tasks.
Even with game playing: Go and Chess, games that require moderate to high planning skills, are all but solved with computers, but I don’t consider them AGI.
I would not consider N entities that can each beat humanity in the Y tasks humans are capable of to be AGI, unless some system X is capable of picking N for Y as necessary without explicit prompting. It would need to be a single system. That being said I could see one disagreeing haha.
I am curious if anyone has different definition of AGI that cannot already be met now.
Comparing the accomplishments of one entity against the entirety of humanity sets the bar needlessly high. Imagine if we could duplicate everything humans can do but it required specialized AIs (airplane pilot AI, software engineer AI, chemist AI, etc.). That world would be radically different from the one we know, and it doesn't reach your bar. So, in that sense, it's a misplaced benchmark.
I think GP is thinking that those would be AIs, yes, but an Artificial General Intelligence would be able to do them all, like a hypothetical human GI would.
I'm not saying I agree, I'm not really sure how useful it is as a term, seems to me any definition would be arbitrary - we'll always want more intelligence, it doesn't really matter if it's reached a level we can call 'general' or not.
(More useful in specialised roles perhaps, like the 'levels' of self-driving capability.)
> Imagine if we could duplicate everything humans could do but it required specialized AIs
Then those AIs aren't general intelligences; as you said, they are specialized.
Note that a set of AIs is still an AI, so AI should always be compared to groups of humans and not a single human. The AI needs to replace groups of humans rather than individuals, since very few workplaces have individual humans doing tasks alone without talking to coworkers.
I believe the best case scenario is one where humans have all of our needs met and all jobs are replaced with AI. Money becomes pointless and we live in a post-scarcity society. The world is powered by clean energy and we become net-zero carbon. Life becomes pointless with nothing to strive toward or struggle against. Humans spend their lives consuming media and entertaining ourselves like the people in Wall-E. A truly meaningless existence.
Francis Fukuyama wrote in "The Last Man":
> The life of the last man is one of physical security and material plenty, precisely what Western politicians are fond of promising their electorates. Is this really what the human story has been "all about" these past few millennia? Should we fear that we will be both happy and satisfied with our situation, no longer human beings but animals of the genus homo sapiens?
It's a fantastic essay (really, the second half of his seminal book) that I think everyone should read
But are we really happy right now, or ever?
Happiness is always fleeting. Aren't our lives a bit dystopian already if we need to work? And for what reason? So that we can possibly feel meaningful, hoping we don't lose our ability to be useful.
Our lives are imperfect, but that doesn't make them dystopian. Some people hate their jobs, but I strongly believe that most people, especially men, would be utterly miserable if we felt unnecessary. You see this today, even, with many young men displaced from the economy and unable to find jobs or start families. A world in which humans are no longer needed in the economy will be inherently fragile, as I believe most people would go out of their way to destroy it
You're basically requiring AGI to be smarter/better than the smartest/best humans in every single field.
What you're describing is ASI.
If we have AGI that is on the level of an average human (which is pretty dumb), it's already very useful. That gives you robotic paradise where robots do ALL mundane tasks.
What is your definition for AGI that isn't already met? Computers have already been superior to average humans in a variety of fields since the 90s. If we consider intelligence as the ability to acquire knowledge, then any "AGI" will be "ASI" in short order, therefore I make no distinction between the two.
AGI must be comparable to humans' capabilities in most fields. That includes things like
• driving (at human-level safety)
• folding clothes with two robotic hands
• writing mostly correct code at large scale (not just leetcode problems) and fixing bugs after testing
• reasoning beyond simple riddles
• performing simple surgeries unassisted
• looking at a recipe and cooking a meal
• most importantly, learning new skills at an average human level: figuring out what it needs to learn to solve a given problem, watching some tutorials, and learning from that.
> I doubt we will have AGI by even 2100...Fundamentally I believe AGI will never happen without a body.
I think this is very plausible--that AI won't really be AGI until it has a way to physically grow free from the umbilical cord that is the chip fab supply chain.
So it might take Brainoids/Brain-on-chip technology to get a lot more advanced before that happens. However, if there are some breakthroughs in that tech, so that a digital AI could interact with in vitro tissue, utilize it, and grow it, it seems like the takeoff could be really fast.
I'm curious to hear your definition of AGI that hasn't already been met, given computers have been superior to humans at a large variety of tasks since the 90s.
I'm not sure these are good examples. What are the actual tasks involved? These are just as nebulous as "AGI".
I assure you computers already are superior to a human remote worker whose job it is to reliably categorize items or to add numbers. Look no further than the Duolingo post that's ironically on the front page at the time of this writing, alongside this very post.
Computers have been on par with human translators for some languages since the 2010s. A hypothetical AGI is not a god; it still would need exposure, similar to training with LLMs. We're already near the peak with respect to that problem.
I'm not familiar with a "hard turing test." What is that?
- Go on LinkedIn or Fiverr and look at the kinds of jobs being offered remote right now: developer, HR, bureaucrat, therapist, editor, artist, etc.
Current AI agents cannot do the large majority of these jobs just like that, without supervision. Yes, they can perform certain aspects of the job, but not the actual job; people wouldn't hire them.
A hard Turing test is a proper Turing test that's long and not just smalltalk. Intelligence can't be "faked" then. Even harder is when it is performed adversarially, i.e. there is a team of humans that plans which questions it will ask and really digs deep. For example: commonsense reasoning and long-term memory are two purely textual tasks where LLMs still fail. Yes, they do amazingly well compared to what we had previously, which was nothing, but if you think they are human-equivalent then imo you need to play with LLMs more.
Another hard Turing test would be: can this agent be a fulfilling long-distance partner? And I'm not talking about fulfilling in the way some people currently have relationships with crude agents. I am talking about really giving you the sense of being understood, learning you, enriching your life, etc. We can't do that yet.
Give me an agent and 1 week and I can absolutely figure out whether it is a human or AI.
> Fundamentally I believe AGI will never happen without a body
I'm inclined to believe this as well, but rather than "it won't happen", I take it to mean that AI and robotics just need to unify. That's already starting to happen.
There's a lot of cool work being done on embodied intelligence -- what makes you think that 76 years wouldn't be enough to create an embodied agent with relevant in-built constraints?
Intelligence involves self-learning and self-correction. AIs today are trained for specific tasks on specific data sets and cannot expand beyond that. If you give an LLM a question it cannot answer, and it goes and figures out how to answer it without additional help, that will be behavior that qualifies it as AGI.
By that definition, what you realize is that it's the same as what I said, since it can easily be reduced down to anything any human can do, and your definition says AGI can go figure out how to do it. You extrapolate this onto future tasks and voilà.
As I mention in another post, this is why I do not make any distinction between AGI and superintelligence; I believe they are the same thing. A thought experiment: what would it mean for a human to be superintelligent? Presumably it would mean learning things with the least possible amount of exposure (not omniscience, necessarily).
> If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047.
Maybe I'm overly optimistic (or pessimistic depending on your point of view, I suppose), but 50% by 2047 seems low to me. That just feels like an eternity of development, and even if we maintain the current pace (let alone see it accelerate as AI contributes more to its own development), it's difficult for me to imagine what humans will still be better able to do than AI in over a decade.
I do wonder if the question is ambiguously phrased and some people interpreted it as pure AI (i.e. just bits) while others answered it with the assumption that you'd also have to have the sort of bipedal robot enabled with AI that would allow it to take on all the manual tasks humans do.
> If you mean the last year, is that pace maintainable?
That is the question, though I'd turn it around on you - over the course of human history, the speed of progress has been ever-increasing. As AI develops, it is itself a new tool that should increase the speed of progress. Shouldn't our base case be the assumption that progress will speed up, rather than to question whether it's maintainable?
I'm of the opposite opinion. I think there's some Dunning-Kruger-like effect at play on a macro scale and it's causing researchers to feel like they're closer than they are because they're in uncharted territory and can't see the complexity of what they're trying to build.
Or maybe I'm just jaded after a couple decades of consistently underbidding engineering and software projects :)
I think history has shown us that we tend to underestimate the rate of technological progress and its rate of acceleration.
It's tempting to look at Moore's law and use, say, the development of the 8080, Z80, and 6502 around 1975 as an epoch. But it's hard to use that to get a visceral sense of how much things changed. I think RAM - in other words, available memory - may be more helpful, and it does relate in a distant way to model size and available GPU memory.
So the question is, if we surveyed a group of devs, engineers and computer scientists in 1975 and asked them to extrapolate and predict available RAM a few decades out, how well would their predictions map to reality?
In 1975 the Altair 8800 microcomputer with the 8080 processor had 8K of memory for the high end kit (4096 words).
8 years later, in 1983 the Apple IIe (which I learned to program on) had 64K RAM as standard, or 8 times the RAM.
13 years later in 1996, 16 to 32 MB was fairly commonplace in desktop PCs. That's 32,768K which is 4096 times the 8K available 21 years earlier.
30 years later in 2005, it wasn't unusual to find 1GB of RAM or 1,048,576K or 131,072 times 8K from 30 years earlier.
Is it realistic to expect a 1975 programmer, hardware engineer or computer scientist to predict that available memory in a desktop machine will be over 100,000 times greater 30 years in the future? We're not even taking into account moving from byte oriented CPUs to 32bit CPUs, or memory bandwidth.
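To make the comparison concrete, here's the implied doubling time for the RAM figures quoted above (a small sketch using the dates and sizes as stated, which are rough to begin with):

    # Implied RAM doubling time from the figures quoted in this comment.
    import math

    ram_kb = {1975: 8, 1983: 64, 1996: 32 * 1024, 2005: 1024 * 1024}

    base_year, base_kb = 1975, ram_kb[1975]
    for year, kb in sorted(ram_kb.items()):
        if year == base_year:
            continue
        growth = kb / base_kb
        doubling = (year - base_year) / math.log2(growth)
        print(f"{year}: {growth:,.0f}x over {year - base_year} years "
              f"(doubling roughly every {doubling:.1f} years)")

That works out to a doubling roughly every 1.8 years over the 30-year span, which is the kind of sustained compounding the 1975 engineer would have had to bet on.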
2054 is 30 years in the future. It's going to fly by. I think given the unbelievable rate of change we've seen in the past, and how it accelerates, any prediction today from the smartest and most forward thinking people in AI will vastly underestimate what 2054 will look like.
> I think history has shown us that we tend to underestimate the rate of technological progress and its rate of acceleration.
It's also been overestimated tons of times. Look at some of the predictions from the past. It's been a complete crap-shoot. Many things have changed significantly less than people have predicted, or in significantly different ways, or significantly more.
Just because things are accelerating great pace right now doesn't really mean anything for the future. Look at the predictions people made during the "space age" 1950s and 60s. A well-known example would be 2001 (the film and novel). Yes, it's "just" some fiction, but it was also a serious attempt at predicting what the future would roughly look like, and Arthur C. Clarke wasn't some dumb yahoo either.
The year 2001 is more than 20 years in the past, and obviously we're nowhere near the world of 2001, for various reasons. Other examples include things like the Von Braun wheel, predictions from serious scientists that we'd have a moon colony by the 1990s, etc. etc. There were tons of predictions and almost none of them have come true.
They all assumed that the rate of progress would continue as it had, but it didn't, for technical, economical, and pragmatic reasons. What's the point of establishing an expensive moon colony when we've got a perfectly functional planet right here? Air is nice (in spite of what Spongebob says). Plants are nice. Water is nice. Non-cramped space to live in is nice. A magnetosphere to protect us from radiation is nice. We kind of need these things to survive and none are present on the moon.
Even when people are right they're wrong. See "Arthur C Clarke predicts the internet in 1964"[1]. He did accurately predict the internet; "a man could conduct his business just as well from Bali as London" pretty much predicts all the "digital nomads" in Bali today, right?
But he also predicts that the city will be obsolete and "ceases to make any sense". Clearly that part hasn't come true, and likely never will. Can't "remotely" get a haircut, or get a pint with friends, or do all sorts of other things. And where are all those remote workers in Bali? In the Denpasar/Kuta/Canggu area. That is: a city.
It's half right and half wrong.
The take-away is that predicting the future is hard, and that anyone who claims to predict the future with great certainty is a bullshitter, an idiot, or both.
> What's the point of establishing an expensive moon colony when we've got a perfectly functional planet right here?
I think this is the big difference between what you're describing and AI. AI already exists, unlike a moon colony, so we're talking about pushing something forward vs. creating brand new things. It's also pretty well established that it's got tremendous economic value, which means that in our capitalist society, it's going to have a lot of resources directed at it. Not necessarily the case for a moon colony whose economic value is speculative and much longer term.
AI exists, yes, but "future AI" that "will be possible soon" that I hear many people talk about on HN doesn't exist. Yet. Maybe it will exist some day. Or maybe not. Or maybe it will take 40 years instead of half a year.
That was really my point: you can't really predict what the future will bring based on what we can do today. People were extrapolating from "we've got fancy rockets and space satellites" to "moon base" in the past, and now they're extrapolating from "GPT-4" to "GPT-5 will replace $thing soon" and even "major step towards AGI". I don't think you can make that assumption.
I'm also somewhat skeptical on the economic value, but that's a long argument I don't have time to expand on right now, and this margin is too narrow to contain it.
Got any historical examples? I lived through it from 1984 onwards as a young programmer and can't think of any, other than perhaps the dot-com bust (I was at eToys) which didn't impede the rate of technological progress much - only the rate of return.
Very interesting, especially the huge jump forward in the first figure and a possible majority of AI researchers giving >10% to the Human Extinction outcome.
To AI skeptics bristling at these numbers, I’ve got a potentially controversial question: what’s the difference between this and the scientific consensus on Climate Change? Why heed the latter and not the former?
We have extremely detailed and well-tested models of climate. It's worth reading the IPCC report - it's extremely interesting, and quite accessible. I was somewhat skeptical of climate work before I began reading, but I spent hundreds of hours understanding it, and was quite impressed by the depth of the work. By contrast, our models of future AI are very weak. Something like the scaling laws paper or the Chinchilla paper is far less convincing than the best climate work. And arguments like those in Nick Bostrom or Stuart Russell's books are much more conjectural and qualitative (& less well-tested) than the climate argument.
I say this as someone who has written several pieces about xrisk from AI, and who is concerned. The models and reasoning are simply not nearly as detailed or well-tested as in the case of climate.
Amazing response, thanks for taking the time - concise, clear, and I don’t think I’ll be using that comparison again because of you. I see now how much more convincing mathematical models are than philosophical arguments in this context, and why that allows modern climate-change-believing scientists to dismiss this (potential, very weak, uncertain) cogsci consensus.
In this light, widespread acknowledgement of xrisk will only come once we have a statistical model that demonstrates it. And at that point, it seems like it would be too late… Perhaps “Intelligence Explosion Modeling” should be a new sub-field under “AI Safety & Alignment” - a grim but useful line of work.
FAKE_EDIT: In fact, after looking it up, it sorta is! After a few minutes skimming I recommend Intelligence Explosion Microeconomics (Yudkowsky 2013) to anyone interested in the above. On the pile of to-read lit it goes…
A climate forcing has a physical effect on the Earth system that you can model with primitive equations. It is not a social or economic problem (although removing the forcing is).
You might as well roll a ball down an incline and then ask me whether Keynes was right.
Ha, well said, point taken. I’d say AI risk is also a technology problem, but without quantifiable models for the relevant risks, it stops sounding like science and starts being interpreted as philosophy. Which is pretty fair.
If I remember an article from a few days ago correctly, this would make the AI threat an “uncertain” one, rather than merely “risky” like climate change (we know what might happen, we just need to figure out how likely it is).
EDIT: Disregarding the fact that in that article, climate change was actually the example of a quintessentially uncertain problem… makes me chuckle. A lesson on relative uncertainty
Wait, I gotta defend my boy Keynes here. His predictions have been nearly as well validated as predicting the outcome of a ball rolling down a plank. Just reading the first part of the General Theory, you could have correctly predicted the labor strikes of 2023. Keynes’ very clear predictions continue to hold up under empirical observation.
I would think any scenario where humans actually go extinct (as opposed to just civilization collapsing and the population plummeting, which would be terrible enough) has to involve a lot of social and economic modeling...
I think human extinction due to climate change is extremely unlikely. However, civilization collapsing is bad enough. We can't be certain whether or not that will actually happen but we do know it will if we do nothing. We even have a pretty good idea when, that's not too late yet, and we have an actionable scientific consensus about what to do about it.
In many ways AI risk looks like the opposite. It might actually cause extinction but we have no idea how likely that is and neither do we have any idea how likely any bad not-quite-extinction outcome is. The outcome might even be very positive. We have no idea when anything will happen and the only realistic plan that's sure to avoid the bad outcome is to stop building AI, which also means we don't get the potential good outcome, and there's no scientific consensus about that (or anything else) being a good plan because it's almost impossible to gather concrete empirical evidence about the risk. By the time such evidence is available, it might be too late (this could also have happened with climate change, we got lucky there...)
I often think about this from a standpoint of curiosity. I am simply curious about how the universe works, how information is distributed across it, how computers use it, and how this all connects through physics. If I'm soon to be greeted by an AI friend who shares my interests, then that's a welcome addition to my circle of friends and colleagues. I'm not really sure why I wouldn't continue pursuing my interests simply because there's someone better at doing it than me. There are many people I know who are better at this than me. Why not add a robot to the mix?
Does anyone know potential causal chains that bring about the extinction of mankind through AI? Obviously aware of terminator, but what other chains would be possible?
One potential line is a general-purpose AI like ChatGPT that can give instructions on how to produce genetically engineered viral weapons, for example. I find this improbable, but it's possible that a future LLM (or whatever) gets released that has this capability but isn't known to have it. Then you might have a bunch of independent actors making novel contagions in their garage.
That would still potentially require a lot of equipment, but it's there.
Another possibility would be some kind of rogue agent scenario where the program can hide and distribute itself on many machines, and interact with people to get them to do bad things or give it money. I think someone already demonstrated one of the LLMs doing some kind of social engineering attack somewhere and getting the support agent to let them in. Not hard to imagine some kind of government-funded weapon that scales up that kind of attack. Imagine whole social movements, terrorist groups, or religious cults run by an autonomous agent.
To borrow a phrase from Microsoft's history, "Embrace, Extend, Extinguish." AI proves to be incredibly useful and we welcome it like we welcomed the internet. It becomes deeply embedded in our lives and eventually in our bodies. One day, a generation is born that never experiences a thought that is not augmented by AI. Sometime later a generation is born that is more AI than human. Sometime later, there are no humans.
People wouldn't even get a vaccine injection because there were supposedly Bill Gates's microchips in them. Now you expect they'll be flocking to get these because they have Bill Gates's microchips in them?
Actually, I could see that happening.
Maybe it's time to give AGI a chance to run things anyway and see if it can do any better. Certainly it isn't a very high bar.
I'm going to take that to mean "P(every last human dead) > 0.5" because I can't model situations like that very well, but if for some reason (see the Thucydides Trap for one theory, instrumental convergence for another) the AI system thinks the existence of humans is a problem for its risk management, it would probably want to kill them. "All processes that are stable we shall predict. All processes that are unstable we shall control." Since humans are an unstable process, and the easiest form of human to control is a corpse, it would be rational for an AI system that wants to improve its prediction of the future to kill all humans.
It could plausibly do so with a series of bioengineered pathogens, possibly starting with viruses to destroy civilization, then moving on to bacteria dropped into water sources to clean up the survivors (as they don't have treated drinking water anymore due to civilization collapsing). Don't even try with an off switch: if no human is alive to trigger it, it can't be triggered, and dead man's switches can be subverted. If it thinks you hid the off switch it might try to kill everyone even if the switch does not exist. In that situation you can't farm, because farms can be seen from space (and an ASI is a better analyst than any spy agency could be); you can't hunt, because all the animals are covered inside and out with special anti-human bacteria; and natural water sources are also fully infected.
If the AGI - which is for some reason always imagined as a singular entity - thinks humans are unpredictable and risky now, just imagine the unpredictability and risk involved in trying to kill all eight billion of us whilst keeping the electricity supply on...
It would have to prepare to survive a very large number of contingencies (preferably in secret) and then execute a fait accompli with high tolerance to world-model perturbations. It might find some other way to become independent from humans (I'm not a giga-doomer like Big Yud, ~13% instead of >99%, though I think he overstates it for (human) risk management reasons), but the probability is way too high to risk it. If a 1% chance of an asteroid (or more likely a comet, coming in from "behind" the sun) killing everyone is not worth it, neither is that same percentage for an AGI/ASI. I don't see the claimed upside, unlike a lot of people, so it's just not worth it on cost/benefit.
Edit: it's usually described as a single entity, because barring really out-there decision theory ideas, they're more of a risk to each other than humans are to them. It's not "well, if instrumental convergence is right, and they can't figure out morality (i.e. orthogonality thesis)", it's "almost certain conflict predicted".
This argument gives a 35% chance of AI "taking over" (granted this does not mean extinction) this century: https://www.foxy-scout.com/wwotf-review/#underestimating-ris.... The argument consists of 6 steps, assigning probabilities to each step, and multiplying the probabilities.
If they accelerate the burning of fossil fuels, extract and process minerals on land and in the ocean without concern for pollution, replace large areas of the natural world with solar panels, etc., the world could rapidly become hostile for large creatures.
An ocean die out as a result of massive deep sea mining would be particularly devastating. It's very hard to contain pollution in the ocean.
Same for lakes. And without clean water things will get bad everywhere.
Ramping up the frequency of space launches into the solar system by a few orders of magnitude to gather further resources could heavily pollute the atmosphere.
Microbes might be fine, and able to evolve in response to changes, for much longer.
That's really a characteristic of being 20 (with the "this" being different things for different people). Good for everybody who manages to keep the feeling later as well, but it is definitely easier at 20...
It seems wild to me that the 50% prediction for "unaided machines outperforming humans in every possible task" is 2047, but for "all human occupations becoming fully automatable" is 2116. That's multiple generations of people going to work (or _training_ to go to work) knowing that a machine could do it better. Someone born after machines can outperform humans at _everything_ could go to school, work a series of jobs, and reach retirement age, if the median predictions are right. What would have to be true of human institutions or the economy for that to be true?
I seriously feel crazy when reading stuff like this. At best this seems to describe the statistics of the consensus of the researchers - lacking any of the social, legal, political, and economic drivers of progress for each prediction. Also, how are these "tasks" or "jobs" actually defined (and how is the believability of each respondent considered when aggregating predictions as it relates to each)? What am I missing?
It's interesting data on what AI researchers think, but why should we think that AI researchers are going to have the most accurate predictions about the future of AI? The skills that make someone a good AI researcher aren't necessarily the same skills that make someone a good forecaster. Also, the people most likely to have biased views on the subject are people working within the field.