Paul Allen: The Singularity Isn't Near (2011) (technologyreview.com)
95 points by rblion on Dec 15, 2012 | 89 comments


When "The Singularity Is Near" was first published, I asked Bill Gates what he thought of the book. He said "I don't know how near the singularity is, but I haven't heard any convincing argument why it won't happen". That seems like a more sensible position than the one offered by Allen in this piece. There is indeed a lot of uncertainty in the rate of progress. But Allen contorts that fact to say that the singularity isn't near. He offers three basic critiques:

First, he points out that Kurzweil is extrapolating, and extrapolations can be wrong. This seems obvious. And it doesn't do much towards Paul's "isn't".

Second, he says that software and hardware will have to keep improving, and that might not happen. Again, this seems intellectually equivalent to saying "you might be wrong". No evidence is provided that progress will slow.

Third, he says that the singularity will require either a bottom-up biologically inspired model, or a non-biologically inspired "AI" system.

The complexity of the former will take a long time to overcome. Since Allen concedes that sufficient computational power is already here, he seems to be arguing that it will take over a hundred years for us to have a detailed model of the human brain. Looking at the progress of science, this seems terribly conservative, yet little justification is offered. Allen's posited "complexity brake" seems positioned tendentiously — why are we going to start hitting it now, instead of fifty years ago?

As for the AI route, he writes: "But when we step back, we can see that overall AI-based capabilities haven’t been exponentially increasing either, at least when measured against the creation of a fully general human intelligence." What does it even mean to be exponential relative to a binary variable? He is right to say that current methods are limited, and don't achieve generalized intelligence, but offers no reason why that won't happen. His argument seems equivalent to simply saying it hasn't happened yet.

This piece offers little intellectual contribution, and the only reason we're reading this piece is because of the author. It reads like he started from the conclusion he wanted, and threw some paragraphs in that direction. His argument, essentially, is that he's betting against continued progress in science, software, and hardware. That just seems crazy.


"The complexity of [a bottom-up, biologically-inspired model of the brain] will take a long time to overcome. Since Allen concedes that sufficient computational power is already here, he seems to be arguing that it will take over a hundred years for us to have a detailed model of the human brain."

Given that Paul Allen donated $100 million in 2003 (and pledged an additional $300 million in 2012) to the Allen Brain Atlas[1] and Allen Institute for Brain Science[2], he probably knows more than most people about this. He certainly isn't betting against continued progress.

[1] - http://en.wikipedia.org/wiki/Allen_Brain_Atlas

[2] - http://en.wikipedia.org/wiki/Allen_Institute_for_Brain_Scien...


Yet throwing money at something does not make him an expert. Neuroscience is growing by leaps and bounds; ask any neuroscientist you know. It is indeed peculiar that he has a pessimistic view of it, but I would bet the best neuroscientists do not share that view.


You might want to check in with some neuroscientists. If you take offense at personal attacks on Kurzweil's sanity and intelligence, you might want to ... prepare yourself. Imagine if a retired doctor started doing speaking tours about the future of the internet and computing and completely disagreed with the general consensus on HN about what could plausibly happen. Imagine what the comments about him on HN would be.

They would view Allen's response as "not delusional" rather than pessimistic. I'm often surprised by how widespread singularity fans think belief in it is.

Outside the tech industry, where people have spent their entire lives in a field that has seen no increasing exponential returns (other than problems solved purely by computation, which they consider a wonderful thing but often not a fundamental advancement to their field), there is very, very little belief that "the singularity is near" or that Kurzweil's utopian interpretation of the singularity is correct.


Neuroscience, as a science, might be growing by leaps and bounds, but if you ask the average neuroscientist how much the field currently knows about how the brain works, I think the answer will be a great deal more modest. It's very early days still, I think. Not to mention the fact that nobody understands how and why consciousness works, which seems to me a key feature in eclipsing human intelligence with AI.


Job prospects for neuroscientists are completely bleak. It's a weird mismatch between what is "needed" and what is actually supplied (or, rather, what anyone will pay for or fund). It's rather sad.


(I don't believe Allen ever actually conceded the point you seem to wish to attribute to him, even after multiple readings.)

There's every reason to believe that things aren't going to get faster or better--CPU development seems to be petering out compared to the gains of yesteryear. Worse, the products that drive those sales, and the markets that drive those products, seem utterly uninterested in anything beyond simple apps and hardware-as-appliance.

The thing a lot of people seem to forget is that the market is what drives this stuff, and that in the absence of market forces not very much happens. This is the world we live in.


CPU development seems to be petering out compared to the gains of yesteryear

This is arguably true if your primary metric is single-threaded performance on branchy-integer-type code. Admittedly this is what normal users see most of the time, which is why consumer desktops and laptops haven't been very exciting the last few years. But the cost to build a petaflop supercomputer keeps dropping.


Sure, but the cost to solve a petaflop-scale problem is not. We've hit a point where the bottleneck is no longer flops but the ability to organize systems that can use them effectively.

It doesn't matter how fast you think we will overcome this problem; it illustrates to many people that the parts of the world that are not increasing exponentially start to matter a great deal once you actually want to solve a problem by moving floating-point numbers about.


I think the biggest test of these sorts of processing predictions will come with our transition away from traditional transistor tech. Whether we switch to circuits on graphene, replace transistors with memristors, or adopt quantum computing doesn't matter, according to singularity proponents: our ability to process information should continue to improve exponentially. I believe, as you said, that we are due for such a switch on the 5-10 year timescale because current CPU development is reaching the end of the line. Whether or not we can seamlessly switch to a new way of computing will make or break many of these singularity predictions.


Just curious, in what context did you ask Bill Gates that?


Allen's mistake regarding AI is a failure to understand that it moves exponentially both by big leaps and by the gradual progression that enables those leaps.

As an example, what comes out of Siri & similar AI programs over the next ten years will ultimately be viewed as a leap, driven by mass consumer demand. But Siri only exists due to thousands of small progressions that might have seemed trivial by themselves.


Thank you, Paul, for speaking out against this cult.

It's very hard for me, as a programmer, to take this idea of a singularity seriously. I know how all of the technologies that Kurzweil is banking on work at a very low level, and I call bullshit that this is ever going to happen. It would require a type of software that simply does not exist yet, something akin to self-programming software, and we are so far away from that that it might as well be cold fusion.

I think it's much more valid to call the "singularity" the point in time when technological expansion started occurring at an exponential rate. Thus I would put the singularity at about 200 years in the past, right around the time the spinning jenny was invented, and right before the industrial revolution. Now there's a point in time that I can point to and say "something meaningful happened". This is complete pie-in-the-sky stuff, and I'm extremely cynical about it; I suspect it's just something that Kurzweil talks about to sell books and conference tickets.


Just a point that should be made in any discussion about the "Singularity": Kurzweil's isn't the only model/definition[1] of the Singularity, and some high-profile Singularitarians don't have a high opinion of Kurzweil[2] ("I've come to the conclusion that Kurzweil's worldview prohibits Kurzweil from arriving at any real understanding of the basic nature of the Singularity").

> I think it's much more valid to call the "singularity" the point in time [...]

Isn't this just redefining the word "singularity", and so making the discussion about something entirely different? It might be better to qualify that as the "industrial singularity" and call the one under consideration here the "technological singularity".

[1]: http://yudkowsky.net/singularity/schools

[2]: http://www.sl4.org/archive/0206/4015.html


I've really got nearly zero respect for Kurzweil as a theorist. He made a bunch of predictions for 2010 in his 1999 book The Age of Spiritual Machines. When 2010 came around he graded his predictions and gave himself very high marks, but when I found a copy of the book and read what he'd actually written, it was clear he'd had to rewrite his predictions substantially in order to count them as successes.

As far as I can tell, the only way to become popular as a futurist is to lay out predictions for the future in far more detail and with far more certainty than could ever be justified.


Well, we have at least a hint of how to go about it software-wise. Jürgen Schmidhuber has been working on recursively self-improving universal problem solvers for decades now:

http://www.idsia.ch/~juergen/goedelmachine.html


It's... rather odd to redefine a term in use so that it's very different from what everybody else in the discussion is talking about. Also, exponential economic growth has been with us since the rise of Homo sapiens, but what we saw 200 years ago was a drastic shortening of the doubling period. You can also look at the invention of agriculture as another increase in the rate of doubling, and talk about how ~human-level AI might cause another decrease in the doubling period. This is, at least, how Robin Hanson thinks about these issues.

http://hanson.gmu.edu/longgrow.pdf


I think we have gone through many singularity points in history. In terms of the history of the planet (or even life on the planet), the last 10,000 or so years show an incredibly fast rate of progress.


Be sure to read Kurzweil's well-argued response too: http://www.technologyreview.com/view/425818/kurzweil-respond...

(imho, more convincing than Allen's).


One of Kurzweil's arguments in his response is "...the design of the brain (like the rest of the body) is contained in the genome. And while the translation of the genome into a brain is not straightforward, the brain cannot have more design information than the genome"

However, the brain's design information is the genome plus the laws of physics plus a universe to run them in.

You only have to look at protein folding, and the complexity of the resulting molecule and how it interacts with the rest of the world, to see how much complexity lies outside of the genome - and it's that complex end result that you have to simulate or replicate.

He goes on to say that "..the amount of design information in the genome is about 50 million bytes, roughly half of which pertains to the brain. That’s not simple, but it is a level of complexity we can deal with and represents less complexity than many software systems in the modern world."

As above, this is greatly simplifying the end result that you're trying to replicate - the genome is just the software that's running on the OS/hardware of the universe. To replicate that virtually we'd also need a virtual universe.
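For scale, here is a rough back-of-envelope on the quoted figure (the numbers below are common approximations, not taken from either article):

    # Back-of-envelope for the quoted "50 million bytes" (approximate figures)
    base_pairs = 3.1e9                 # haploid human genome, roughly 3.1 billion base pairs
    raw_bytes = base_pairs * 2 / 8     # 2 bits per base (A/C/G/T) -> ~775 MB uncompressed
    print(round(raw_bytes / 1e6), "MB raw")
    # Kurzweil's ~50 MB is his estimate after lossless compression of the genome's
    # heavy repetition; the objection above is that counting genome bytes ignores
    # the physics and environment the genome is "run" against.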


The virtual brain could use the real world, just as the real brain does. Just like the Google cars and Watson are doing.


The main complication of protein folding is how the amino acid chain interacts with itself, not its interactions with an external environment.


That proves the point. To be able to fold a simple protein, you have to be capable of the very challenging task of simulating "real world physics at an atomic scale".

To simulate a brain, you get to do that for the few hundred trillion proteins that make up a brain... and all their interactions with one another.

The complexity doesn't come from the proteins so much as the physics that govern the proteins.


It's certainly longer and more flowery, but a lot of it, upon even the shortest reflection, rings hollow:

  And while the translation of the genome into a brain is   
  not straightforward, the brain cannot have more design 
  information than the genome
How could a Game of Life grid possibly have more state or require more storage than the ruleset for generating it! Gasp! This is just sloppy on his part.

  We do need to understand in detail how individual types 
  of neurons work, and then gather information about how 
  functional modules are connected. The functional methods 
  that are derived from this type of analysis can then 
  guide the development of intelligent systems.
I suspect this reflection would be about as useful in developing new intelligent systems as meditation on the semantics of various MIPS instructions would be for developing new processor implementations. He's looking at the wrong level.

  The Google self-driving cars (which have driven over 
  140,000 miles through California cities and towns) learn 
  from their own driving experience as well as from Google 
  cars driven by human drivers.
A vacuous statement--to what degree does a Kalman filter "learn" from its state estimates of speed or similar? His statement may well be technically accurate, but its explanatory power is wanting.
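To make the "degree of learning" question concrete, here is a toy 1-D Kalman filter (a plain textbook update, not Google's actual system; the noise values and function name are arbitrary):

    # Minimal 1-D Kalman filter: the only "learning" is updating a state
    # estimate x and its variance p from noisy measurements z.
    def kalman_1d(measurements, q=1e-3, r=0.5, x=0.0, p=1.0):
        estimates = []
        for z in measurements:
            p = p + q                # predict: uncertainty grows by process noise q
            k = p / (p + r)          # Kalman gain, given measurement noise r
            x = x + k * (z - x)      # correct the estimate toward the measurement
            p = (1 - k) * p          # shrink the uncertainty accordingly
            estimates.append(x)
        return estimates

    print(kalman_1d([1.1, 0.9, 1.05, 0.98]))   # estimates drift toward ~1.0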

~

All the buzzwords and nerd feelgood talking points are there, but I'm very put off by the random anecdotes and clumsy assertions.


How could a Game of Life grid possibly have more state or require more storage than the ruleset for generating it! Gasp!

It can't have more state. The initial conditions have just as much state as any later generation. The initial state is {off, off, off, off, off, off, ... }. That's just as many bits as any later stage.

What you really mean is that the initial state is easily compressible. (Run-length encoding can express an entire empty grid in two numbers, as length and value.) And that's pretty much what Kurzweil means; actually, the Game of Life is a great analogy here. The genome is a compressed data set, which doesn't express or exhibit anything useful in itself, but can be decompressed using external inputs of organic matter and energy to procedurally generate the neurons that will form the intelligence.
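(For the curious, run-length encoding really is that small; a quick sketch, with a made-up helper name:)

    # Run-length encoding of one Game of Life row: an all-off row collapses
    # to a single (value, length) pair even though it has just as many cells.
    def rle(cells):
        if not cells:
            return []
        runs, count = [], 1
        for prev, cur in zip(cells, cells[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append((prev, count))
                count = 1
        runs.append((cells[-1], count))
        return runs

    print(rle([0] * 64))            # [(0, 64)]  -- the whole empty row in one pair
    print(rle([0, 0, 1, 1, 1, 0]))  # [(0, 2), (1, 3), (0, 1)]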


It can't have more state. The initial conditions have just as much state as any later generation. The initial state is {off, off, off, off, off, off, ... }. That's just as many bits as any later stage.

That encoding is only viable if you restrict the game to a finite grid.


While true, the infinite case is of no interest here: a brain is finite. (Many infinite grids require an infinite amount of information to store, and clearly many of our assumptions break down there.)


The initial conditions in the Game of Life aren't normally all-off, because then every generation would be all-off.

In any case, the amount of information in the game of life is merely (initial state, number of iterations) which is certainly finite for a finite board.


There's a lot of good in his response, but I do take issue with two of the concepts:

    > How do we get on the order of 100 trillion connections 
    > in the brain from only tens of millions of bytes of 
    > design information? Obviously, the answer is through 
    > redundancy.
No. That's called algorithmically generated complexity. It's where the function generates data that's more complex than the function itself, because the function (DNA in this case) is fitted explicitly to match that output. This isn't a complicated idea; stop avoiding the issue. The brain is complicated.

    > Combining human-level pattern recognition with the inherent 
    > speed and accuracy of computers will be very powerful.
No kidding. If we had 'human level pattern recognition' this whole thing would be a lot easier, wouldn't it? ...but that doesn't just magically appear from nowhere.

The reality is that most progress follows a lognormal curve (http://en.wikipedia.org/wiki/File:Lognormal_distribution_CDF...); it starts linear, and ends flat. Sometimes it starts flat, looks like it's going exponential and then goes flat... but it always ends up flat.
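(For reference, here is what that S-curve looks like numerically; the parameters below are arbitrary, purely to show the shape:)

    from math import erf, log, sqrt

    # Lognormal CDF: the S-curve shape behind the claim that progress
    # accelerates for a while and then flattens out.
    def lognorm_cdf(x, mu=0.0, sigma=1.0):
        return 0.5 * (1 + erf((log(x) - mu) / (sigma * sqrt(2))))

    for x in (0.1, 0.5, 1, 2, 5, 20):
        print(x, round(lognorm_cdf(x), 3))
    # 0.1->0.011, 0.5->0.244, 1->0.5, 2->0.756, 5->0.946, 20->0.999: slow, steep, flat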

The trick is, does the singularity happen after we hit the flat curve (which means it's going to be some nebulous time in the future that may never arrive) or does it happen while we're still on the 'exponential acceleration' curve (which means it may happen in-this-lifetime).

I don't see us on an exponential increase path for cpu power, cost reduction or algorithmic strength at the moment.

If anything I see us as past the main hump of growth and into the 'slowdown' phase for all of these.

So, I'm dubious of this rebuttal, which is basically endorsing the stupid 'growth for ever! exponential growth! yay! growth!' idea. -_-


If you use a Kolmogorov measure of complexity, the complexity of a function and/or its output is defined to be the length of the shortest description of it in some universal language. So a function cannot, by definition, generate data that is more complex than the function itself.

Pi can be described by a relatively small formula/function. It can generate very long sequences of seemingly random digits. The complexity of all of those infinitely many digits is still just that of the simplest formula that generates Pi.
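A concrete version of that claim, if it helps: the digit stream below comes from a few lines of code (this is a well-known unbounded spigot algorithm, usually attributed to Jeremy Gibbons, not anything specific to this discussion):

    # Unbounded spigot for the digits of pi: a tiny generator that emits
    # as many digits as you care to ask for.
    def pi_digits():
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                yield n
                q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
            else:
                q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                    (q * (7 * k + 2) + r * l) // (t * l), l + 2)

    gen = pi_digits()
    print([next(gen) for _ in range(10)])   # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]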


Poor phrasing on my part.

Pi is a great example.

The full digit sequence of Pi cannot be compressed, using traditional means, into the same number of bytes that the algorithm generating Pi fits in.

The point is that a function can generate a dataset that is complex, without redundant information in the data. A dataset that cannot be reduced to a smaller dataset.

The idea that a complex function can only generate a simple data set with 'high redundancy' is obviously nonsense.

My point being that the OP is suggesting the DNA that is the functional generator for the brain generates highly redundant physical structures because there is a limited amount of information in the functional generator (DNA).

While the DNA may generate redundant or repeated structures, it does so (if it does) because of the function. Not because of the information density of the DNA.

As your example of Pi shows, a trivial information density can generate complex datasets without repetition.


I don't think your argument about "traditional means of compression" holds.

The existence of redundant data can be proven by the ability to store the same information with less data.

Pi's information density is in fact no larger than the small function that generates Pi's digits -- even if traditional compressors fail to identify it.


  A classical example is the laws of thermodynamics [...]
I'm not sure what his argument is here. The laws of thermodynamics are scientific laws because we can test them and repeat them in a laboratory. This is untrue of Kurzweil's LOAR (Law of Accelerating Returns), as far as I understand the requirements to deem something a scientific law [1].

[1] http://en.wikipedia.org/wiki/Scientific_law


My first thought when I saw the title was that Paul Allen has some patent and obtained an injunction against the singularity.

Somehow I cannot bring myself to think about the whole idea (of the singularity, the economics of emulated minds, etc.) seriously. Whenever Robin Hanson (of Overcoming Bias[1]) writes about it, I usually skip the page, even though his ideas are new and insightful. I also read some stuff from Eliezer Yudkowsky (of Less Wrong[2]), and again it's a great concept, he even wrote some cool stories about it[3], but despite all the emphasis on correcting one's own biases and other follies of the mind, it feels way too much like some sort of techno-cyberpunk religion from the 80s.

[1] http://www.overcomingbias.com/tag/future

[2] http://lesswrong.com/about/

[3] http://lesswrong.com/lw/qk/that_alien_message/


I fear that one of the worst things religion did to people is to make them think that every hope of doing something amazing and making the world a better place is insane. People are afraid of great dreams, because they think it's religion. They think poorly of singularity-related stuff just because they pattern-match long life (and/or immortality) and superhuman intelligence with what religions promise and talk about. They shouldn't. Living longer, living better, improving ourselves and extending our capabilities beyond what is currently possible is not faith-stuff; it is the dream of mankind at large, and we're building our technology to achieve it.


The best thing that religion did is show modern man the dangers of choosing faith over evidence. People don't call the singularity a religion because they want the world to be better or want to live forever or know more. Everyone wants that. They call it a religion because people believe this will happen in their lifetimes because they really want it to. They have faith and have shown no evidence. It's not about the belief in improvement, it's about the belief in prediction of a specific kind of improvement and a prediction of a timeline.


I would upvote this a million times if I could. The whole concept of the "Singularity" confuses hardware with software.

Yes, hardware technology has followed an exponential rate of improvement, and there's no obvious reason to believe that will stop.

But software certainly hasn't. Programmers today still deal with the same issues they dealt with 30 years ago. Separately, we've figured out some cool pattern-recognition techniques, but there's absolutely nothing indicating "exponential" growth in how smart our programs are.

Yet the "Singularity" depends primarily on software, not hardware. And all this talk about getting around it by just "scanning in" actual brains and simulating them on hardware... well that still doesn't show how those simulated brains are suddenly going to get smarter than our own.

(And then you've got a whole lot of human-rights and personhood issues when you start dealing with actual seemingly conscious brains running on silicon, with real childhood memories, real emotional desires and whatnot -- I mean, they would basically be actual people, not some kind of abstract AWS brainpower cluster...)


I'm not sure that the software/hardware break is as clear cut as that:

http://bits.blogs.nytimes.com/2011/03/07/software-progress-b...


Kurzweil makes the same point in his rebuttal.

But all he (and the article) is talking about is speed improvements to algorithms. Singularity-type AI is not about the speed, and it's not about complexity in terms of number of moving parts either (a million-line code base might be larger than a 10,000-line one, and more complicated, but not necessarily any more conceptually complex).

What matters for the "Singularity" is conceptual improvements. We're still writing brittle computer code where the compiler gives up over a single slightly misspelled constant name or missing comma, even though the intention would still be crystal-clear to any human programmer. I don't see any kind of "exponential" progress whatsoever relating to the fundamental building blocks of artificial thought, which is what the entire "Singularity" premise is based on.


http://www.fightaging.org/archives/2005/09/reading-the-sin.p...

I am prepared to go out on a limb here, as I have done before, and say that business and research cycles that involve standard-issue humans are incompressible beneath a certain duration - they cannot be made to happen much faster than is possible today.

Kurzweil's Singularity is a Vingean slow burn across a decade, driven by recursively self-improving AI, enhanced human intelligence and the merger of the two. Interestingly, Kurzweil employs much the same arguments against a hard takeoff scenario - in which these processes of self-improvement in AI occur in a matter of hours or days - as I am employing against his proposed timescale: complexity must be managed and there are limits as to how fast this can happen. But artificial intelligence, or improved human intelligence, most likely through machine enhancement, is at the heart of the process. Intelligence can be thought of as the capacity for dealing with complexity; if we improve this capacity, then all the old limits we worked within can be pushed outwards. We don't need to search for keys to complexity if we can manage the complexity directly. Once the process of intelligence enhancement begins in earnest, then we can start to talk about compressing business cycles that existed due to the limits of present day human workers, individually and collectively.

Until we start pushing these limits, we're still stuck with the slow human organizational friction, limits on complexity management, and a limit on exponential growth. Couple this with slow progress towards both organizational efficiency and the development of general artificial intelligence, and this is why I believe that Kurzweil is optimistic by at least a decade or two.


Is there any food that makes a substitute for bread on sandwiches?

I've always wondered that. I think I could substantially reduce my carbs but I love sandwiches.

Any ideas?


Ok, I'm really curious why my post in the wrong thread has 6 upvotes?


Lettuce?


So sorry wrong window.


Extrapolating future-history is extremely difficult. You get into this mode where you extend the current set of capabilities and limitations until you hit an edge, and to get past it you invoke magic.

But it is so enjoyable and useful to creatively extrapolate or generate history, if only to encourage us to sink our time and energy into pushing that edge.


If you assume that a breakthrough is possible, and desire its occurrence, then you've got a subjective bias to find a way to logically predict it will happen within the bounds of your lifetime.


Actually, some of the singularity people regard the singularity as both inevitably happening soon(ish) and as their worst nightmare. Not desirable at all.

The superintelligent machines will take over the world and probably destroy humankind as a side effect. Consequently, they think that we urgently need philosophical musings over possible ways to ensure that the inevitably created superintelligent AI overlords would be built using principles that make them friendly to humankind. This is (according to them) the only hope of saving humankind from extinction in the near future.


I don't equate desire and positive outcome. Sometimes the waiting is worse than the thing itself, especially if you know it's coming.


The quotes on their website are a hoot. First, there's a modal you have to manually dismiss: "Singularity University is acquiring the Singularity Summit from Singularity Institute."

Then, such gems like "The Singularity Summit is the premier conference on the Singularity. As we get closer to the Singularity, each year's conference is better than the last."

They seem kinda obsessed with "thought leaders" and not so much "thought doers."


"Their website" and "they". Who are you referring to?

Edit: I guess Singularity Institute http://singularity.org/ since the banners and text seem to match up, maybe?

In any case, the news of the transfer of the summit is evidence against your last point. It means (I hope) that Singularity Institute is trying to free up their researchers to be able to concentrate on actual work, rather than organising a conference every year.


Here's a probably important proposition that Peter Thiel and Garry Kasparov have been putting forward, and I have yet to see engaged with and answered:

If we are truly accelerating technologically, why have the exponential gains in wages and PPP in the developed world largely stopped? They have hardly kept up with inflation.

Compare 1891 - 1931, 1931 - 1971, and 1971 - 2011.

People point to improvements in computer technology -- why have they not yielded significant improvements in the world of stuff? There has not been the expected productivity increase at all.


The way I think about this, you have to correct for 'technological deflation.' What I mean by this is, if you only think about the monetary loss of purchasing power, then you will probably conclude that an iPhone costs something like $500 (in 2000 dollars). However, to actually buy an iPhone in 2000 you would have needed to spend several billions, and delivery would have taken something like ten years. (Most of this is developing processors and displays that can power an iPhone.) Another example would be flat screens: there were some in 2000 (about as expensive as a car). So you don't see this increase in wages simply because you are comparing against a moving target.


If that were the case, and computing technology were part of the PPP goods bundle, wages would be seen as going UP.

The question is, if you subtract computers, why do you not see any improvement? Is this not evidence of a technological slowdown, at least in the world of stuff?


My impression is that the world of stuff moves more slowly, but it is moving. (I don't have nice data to back this up, but cars are more fuel efficient, airline tickets decrease in price, etc.) And on top of this you get a lot of 'add a computer' inventions. For example, (non-mobile) phones that store addresses.


Beyond basic subsistence, most things we produce are status symbols that have zero net-effect--guy A's status goes up as a result of his purchase, guy B's status goes down.


This whole thing (futurism, the (+-Not)singularity, what may one day come) is fascinating to me and clearly to pretty much everyone.

Setting aside the pseudo-religious arguments which are often raised in objection (and, personally, I believe legitimately), I think we are left with 2 options:

1) the Singularity is a real thing which will happen at some point in the future

2) Consciousness is not able to be crafted by man, and the logistics of downloadable consciousnesses and infinitely extendable lifespans are forever beyond us, and life will continue pretty much as it has since pre-history, with better technology making a 'richer' life a possibility for a greater and greater proportion of humanity.

As much as I would hope for option 1), the question I feel needs to be asked is: why have not other conscious beings (which logically must exist elsewhere in the universe, regardless of the numbers you plug into the Drake equation) come to us in their flying robot suits?

The complex interplay of neurology and computing is only in its infancy, and I wait with bated breath for the advances we are making in our understanding.


Concerning your question, you're probably familiar with these arguments, but here goes.

Assuming it's true, the main conclusion from Fermi's paradox (which is really an observation more than a Paradox) is that there is some kind of "cliff" that vastly reduces the number of civilisations that achieve the technology required to come visit us with their robot suits.

The interesting question is, where is this cliff: before where we are now, or after?

If it is before, then that's great for us. What it means is, for example, that perhaps the evolution of life, or multi-cellular life, or animal life, or intelligent life, etc - is so unlikely that even though there are trillions of trillions of attempts, those that succeed are so far apart that they will never meet (perhaps thanks to the expansion of the universe, or through the difficulty of travelling across interstellar distances, etc). That's the lucky scenario.

The unlucky scenario is that this cliff is after where we are now. Perhaps there are millions of intelligent species even in just the Milky Way, but perhaps intelligent life is doomed to self-destroy eventually (for example by reaching a Singularity, building a Dyson sphere, and then basically disappearing up its own arse; or perhaps by ending in some kind of nuclear war, or biological wipeout after someone's biology experiment goes horribly wrong, etc).

That latter scenario would mean that we're likely to kill ourselves before we get to the stars.

I hope for the first scenario.

Edit: Of course, as pixl97 points out, perhaps Fermi's paradox is an incorrect observation, and the aliens are already here, they're just being really cautious about being observed.


"perhaps the evolution of life, or multi-cellular life, or animal life, or intelligent life, etc - is so unlikely ..."

Bacterial life appeared on Earth pretty much as soon as the planet had cooled down.

An argument could be that some extremely unlikely chemical coincidence was needed to create life, but I think the observation gives more support for the theory that if you have a planet with prebiotic soup, the biotic part is going to kick in pretty soon.

But then it took 4 billion years -- or 30% of the whole universe's lifetime -- for life to invent multicellularity (with some important steps, like the invention of the nucleus and thus eukaryotic cells, still unicellular life, half-way in between).

I don't have a problem believing that bacterial life is abundant in the universe. But life on Earth, inventing multicellularity only in 4 billion years, and then sentience only in 0.6 billion more (multicellularity obviously being the harder part), is among the fast ones.

Wait another 4 billion years to give the slower ones a fair chance to discover multicellularity, too.


Is there any reason to think that, if there were many alien civilizations in our galaxy capable of radio transmission, we would have picked up on the signal? The last I checked (and I admit it has been a few years), the folks at SETI have basically said that even if the Milky Way galaxy were rife with technologically advanced civilizations, the chance that we would have observed one is very small, given efforts to date.

I've read a paper which pretty much lays out exactly the argument you describe, and the whole time I couldn't help but think the author was ignoring the very obvious. We can't even be sure that advanced civilizations would use radio that much, if at all, and even if they did, it's not as though we've conducted such a thorough search that we can confidently say "nope, no radios here". Not even close, in fact.


why have not other conscious beings

1. We are the first, or we are the first to get close to the singularity in our light cone.

or

2. Generating the required energies, or overcoming the challenges of traveling even close to the speed of light, is hard.

or

3. Space is big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the drug store, but that’s just peanuts to space.

or

4. They have.


5. Would it be "ethical" for a "post-singularity" civilization to contact a "pre-singularity" one?

Think a bit about this one. Let's define the singularity as the ability to either a. upgrade our brains without any biological limits or b. upload our consciousness to a computer that can be upgraded without limits, as immortality and all other consequences will follow from this. Interfering with a pre-singularity race would be the equivalent of us using a newly discovered limitless brain-enhancing technology to bring monkeys, dolphins and even cockroaches up to human and then limitless superhuman levels (from our infinitely high level, a monkey would seem close in intelligence to a cockroach)! Think about this for a moment! Would you want to waste your technology to "upgrade" inferior life (possibly trespassing on their "free will to choose their own evolutionary strategy"), when you could create new intelligence from scratch that would be designed to evolve faster and be more like you, so you could feel empathy for it?

I don't think a post-singularity civilization would bother contacting a pre-singularity one. They would either analyze it stealthily without interfering... or just use them or their homeworld for resources without caring about their fate or even their survival!

Now, assuming there really is a discernible point of "singularity" as defined by a. and b. above, the meeting of two post-singularity civilizations would be a whole different thing. I think we have quite a bit of "growing up" to do before we get invited into any "galactic club" :)


1. I hope.

2. It doesn't matter; the galaxy can be completely converted to computronium in around 10M years even at 1% of the speed of light (the Milky Way is roughly 100,000 light-years across, and 100,000 / 0.01 = 10 million years to cross it).

3. Exponential growth is bigger than space.

4. ...or in a simulation.


>>>3. Exponential growth is bigger than space

Some people think that space is expanding as well. These people think that intra-galactic distances are relatively consistent, but inter-galactic distances are increasing in an accelerating fashion.

"Additionally, the expansion rate of the universe has been measured to be accelerating due to the repulsive force of dark energy which appears in the theoretical models as a cosmological constant. This acceleration of the universe, or "cosmic jerk", has only recently become measurable, and billions of years ago, the universe's expansion rate was actually decelerating due to the gravitational attraction of the matter content of the universe. According to the simplest extrapolation of the currently-favored cosmological model (known as "ΛCDM"), however, the dark energy acceleration will dominate on into the future."

http://en.wikipedia.org/wiki/Metric_expansion_of_space


Point taken. As an explanation for the Fermi Paradox, however, that isn't so useful. :)


This is a question that was bothering me as well. It seems like evolution will always eventually lead to a singularity, so why don't we see it all around us?

Jason Silva has an explanation that I think could be on the right track: http://www.youtube.com/watch?v=nQOyJUDTKdM&noredirect=1


I can't say that I ever bought into the singularity by 2045. Things always take longer than expected. When I was a kid, I always thought that technology in 2001 would be incredible. As it stands now, the US doesn't even have a manned space vehicle.

Anyway, the one good thing that could come out of the Singularity discussion is that maybe we can have a concerted effort to make it happen. A lot more research money into big science, for example: hypersonic flight, maglev trains, the Texas supercollider, space exploration, medical research, clean energy, etc. Imagine if the first flight had happened 100 years before the Wright Brothers, or the telephone had been invented even 50 years earlier. We might not have a Singularity by 2045, but we can actively accelerate our quest for knowledge.


The technology in (real world) 2001 was incredible. But all the progress was in computers and communication, not travel.

Back in the 20th century, everyone looked at the incredible advances in travel technology from 1850-1950 and assumed the growth curve would just keep on going. But it didn't; it peaked out somewhere in the 1970s. Since then it's gotten cheaper, but not faster.

I think the Singularity is the exact same sort of projection. Progress in computer technology was incredibly fast from 1970 to 2000. My phone has a processor thousands of times faster than my first computer, back in 1982. And if progress continued at that rate, the Singularity probably would be inevitable. But it hasn't. My desktop computer today is not significantly more powerful than the machine I had in 2007. My laptop is much better than my laptop then, but it's only on par with my 2007 desktop. My phone is insanely better than my 2007 phone. The current trend is for computing power to get smaller and cheaper, rather than getting more computing power. That's great, but I don't see it getting us to the Singularity anytime soon.


> The current trend is for computing power to get smaller and cheaper, rather than getting more computing power.

I think it's a mistake to compare specific devices from different eras, like a 2007 desktop and a 2012 desktop. In order to find total computing power, you need to add your phone, tablet, desktop, laptop, and cloud services together. When you do that, you see that we all have far more computing power than we did in the past, but we've chosen to spread that power over a variety of different devices.


Smaller and cheaper means they'll become more ubiquitous.

Your desktop may not be getting much faster, but your TV got smarter, baby toys have more computing power than your first PC, a new car has more smartphone features than your phone from 2007, and everything else seems to be getting more computing power whether it needs it or not.


You can literally call anyone else on the planet using a device the size of a pack of cigarettes, and stream in real time full motion video of a flying cat poptart thing, or watch a beautifully rendered simulation of same, while commenting on it with millions of anonymous other people on the Internet.

The Big Science stuff is cute and all, but don't for a second pretend that technology in common use isn't mindblowingly awesome.


15 years ago you could have called someone anywhere in the world on a device the size of a pack of cigarettes. It was a great phone:

http://en.wikipedia.org/wiki/Motorola_StarTAC

Everything being discovered at CERN today could have been discovered 20 years ago:

http://en.wikipedia.org/wiki/Superconducting_Super_Collider

Yes, because of semiconductor advances (Moore's Law) computers and communications advance at a blinding pace. However, we would all benefit if we could push other areas too.

It's not about how far we've come, it's how far we have yet to go.


I agree that there is no convincing evidence the Singularity is so near, but I don't agree with his central argument that the necessary software-engineering breakthroughs require accurate models of human cognition. The AI and ML successes we've had over the last decade aren't based on developing some kind of imperative algorithm for evaluating a specific problem domain. We don't need one for cognition either; we just need a machine that can learn how to learn better. That is really all the singularity is: unbounded recursive self-improvement.
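A cartoon of "improving the improver", in the spirit of that definition (this is only a toy (1+1) evolution strategy with the classic 1/5 success rule, with made-up names and numbers; it is not a claim about how real AI self-improvement would work):

    import random

    def f(x):
        return -(x - 3.0) ** 2                  # toy objective, maximum at x = 3

    # A hill climber that periodically tunes its own mutation step size:
    # the system adjusts the part of itself that controls how well it searches.
    def self_tuning_search(iterations=500):
        x, sigma, successes = 0.0, 1.0, 0
        for i in range(1, iterations + 1):
            candidate = x + random.gauss(0, sigma)
            if f(candidate) > f(x):
                x, successes = candidate, successes + 1
            if i % 20 == 0:                     # every 20 steps, improve the improver
                sigma *= 1.5 if successes / 20 > 0.2 else 0.7
                successes = 0
        return x, sigma

    print(self_tuning_search())                 # x typically ends up close to 3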


Hmm, Kurzweil joins Google -> Allen says Singularity is bunk. Looks like there has been an ongoing exchange, but funny that both items make the front page of HN on the same day...


This is from 2011, so it had absolutely nothing to do with Kurzweil joining Google.


It's almost like someone is trying to discredit him, or Google for hiring Ray Kurzweil.


I would suppose that by 2045 we might technologically be capable of "the singularity", but I'm not convinced we'd be socially accepting of a singularity. The old saying, "the future is already here, it's just not evenly distributed," will still apply 30 years from now.

If you could quantify social progress, I would theorize that its growth is linear. Which means our capability and how it fits into our society are becoming decoupled from each other by an increasing amount every year.


Thesis: The singularity must be prevented at all costs.

Evidence: The increasing uselessness of human beings to economic activity.

Evidence: The callousness of existing suprahuman organisms (e.g. Monsanto, DuPont, Keystone XL, the US government) toward economic externalities with biological consequences.

Conclusion: The singularity would most likely result in the complete extinction of biological human beings.

Nota Bene: Structured like a High School debate contrapositive because that's what the singularity always sounds like.


It's only about 32 years until 2045. 32 years ago was 1980.

Emacs was first released in 1976.

Lisa with its mouse and GUI was first developed in 1978.

The first handheld mobile phone was demonstrated in 1973.

The TCP/IP protocol was first standardized in 1982.

Thirty years on we're still relying on the same core technologies, though more mature.

What new technologies have appeared in recent years that will become more mature in 5, 10, 20, 30 years?


Can anyone point me to information about this question: "What have we done to discover and understand the differences between human and animal brains, to see what accounts for the vast difference in intelligence?"

I'm sure I could find a lot by Googling, but asking this group seems to be a more efficient way to benefit from some curation.


I know there is a lot of ongoing research into learning the 'operating system' of the brain across quite a few animal models. A lab where I work looks at the neuron level in rats, mice, and monkeys while these animals learn behavioral tasks. Classic behavioral tasks are merging with basic neuroscience and advanced imaging, with the help of computing.

At the neuron level, we are still at very immature levels of understanding, but the path and direction being taken looks very promising.


Well, a starting place is to just read about the structure of the human brain on Wikipedia, versus that of other mammals. One of the key differences is just the size of the human brain compared to body mass.


I thought the singularity just meant that technological change was so rapid that it ceased to have meaning.

I feel like we might be there today. I'm not sure what could be engineered tomorrow that would surprise me meaningfully.

If someone says that they have a working fusion reactor, I'd just be like, "Geez, finally. What the hell took so long?"


This is a discussion about intelligence. But do we have a clear definition of intelligence?

Without a clear definition, one can say that the Singularity already happened.

Chess masters are definitely considered intelligent. So why aren't computer programs which beat them called superintelligent? Because they use brute-force algorithms?


Because the chess engines aren't sufficiently general purpose.

http://lesswrong.com/lw/vb/efficient_crossdomain_optimizatio... tries to address questions like the ones you ask. (Obviously the definition proposed there of "efficient cross-domain optimisation" isn't the definition of the word "intelligence", because no such thing can exist, but it is a definition that seems to match with our intuitions well.)


I think when intelligence is mentioned in these discussions it is implicitly defined as "being reasonably capable of doing everything a human can."

However, intelligence can also be defined as "the ability to maximize a risk/reward function." In this case, Kurzweil and Allen aren't referring to a specific function, so the first definition supersedes this second, more precise definition.


Note: The linked article was published in October 2011. It's not exactly news, but does make for stimulating reading!


"The singularity isn't near". Only became obvious when Kurzweil signed with Google???


I'm fairly skeptical of a near-term singularity, but he provided no evidence that technical growth isn't exponential. Exponential doesn't mean "fast". He established that problems that were once impossible are still very difficult.

Economic growth is faster than exponential. At every point in history, growth has been more rapid than in the era before: pre-Cambrian vs. post-Cambrian evolution, pre-mammalian vs. mammalian, evolutionary vs. paleolithic, pre-agrarian vs. agrarian, agrarian vs. industrial.

I don't think we're going to see a "Singularity" in 2045. We might see 10-50% annual economic growth by then. That wouldn't surprise me. By that point, we're probably making serious inroads on a wide variety of health problems, and life expectancy at birth will probably be over 85 and may be undefined. I think it's a good bet that someone born in 2045 will see the year 3000, not because of a Singularity, but because such a person won't even begin to experience old age until the 22nd century.


Student of Applied Mathematics - Economics here. I took a class with Oded Galor, the primary proponent of Unified growth theory [1], a single model that describes the transition from the Malthusian trap [2] to an era of rapid growth, and finally to a sustained-growth regime. According to this economic growth model, the United States has already reached the sustained-growth regime - meaning that we can expect continued growth of, say, 1-2% for eternity.

Economic growth is NOT faster than exponential in a sustained-growth steady state. Do some googling for long-term economic forecasts, and you will find much research that supports low single-percent growth for the 21st century [3].

[1] http://en.wikipedia.org/wiki/Unified_growth_theory

[2] http://en.wikipedia.org/wiki/Malthusian_trap

[3] http://www.economist.com/blogs/buttonwood/2012/11/economic-o...



