Hacker News

Carmack actually discusses this in the podcast when Neuralink is brought up. He seems extremely excited about the product and future technology (as am I), but he provides some, in my opinion, pretty convincing arguments as to why this probably won't happen and how at a certain point AGI will overshoot us without any way for us to really catch up. You can scale and adjust the architecture of a man-made brain a lot more easily than a human one. But I do think it's plausible that some complex thought-based actions (like Googling just by thinking, with nearly no latency) could be available within our lifetimes.

Also, although I believe consciousness transfer is probably theoretically achievable - while truly preserving the original sense of self (and not just the perception of it, as a theoretical perfect clone would) - I feel like that's ~600 or more years away. Maybe a lot more. It seems a little odd to be pessimistic about AGI and then talk about stuff like being able to leave our bodies. That seems like a much more difficult problem than creating an AGI, and creating an AGI is probably the hardest thing humans have attempted so far.

I'd be quite surprised if AGI takes longer than 150 years. Not necessarily some crazy exponential singularity explosion thing, but just something that can truly reason the way a human can (with or without sentience and sapience). Though I'll have no way to actually register my shock, obviously. Unless biological near-immortality miraculously arrives well before AGI... And I'd be extremely surprised if it happens in something like a decade, as Carmack and some others think.



I'm no Carmack but I do watch what is happening in the AI space somewhat closely. IMHO a "brain" or intelligence cannot exist in a void - you still need an interface to the real world. Some would go as far as to say that consciousness is actually the sensory experience of the real world replicating your intent (i.e. you get the input and predict an output, or you get the input and perform an action to produce an output), plus the self-referential nature of humans. Whatever you create is going to be limited by whatever boundaries it has. In this context I think it's far more plausible for super-intelligence to emerge built on top of human intelligence than for super-intelligence to emerge in a void.


How would this look, exactly, though? If you're augmenting a human, where exactly is the "AGI" bit? It'd be more like "Accelerated Human Intelligence" rather than "Artificial General Intelligence". I don't really understand where the AI is coming in or how it would be artificial in any respect. It's quite possible AGI will come from us understanding the brain more deeply, but in that case I think it would still be hosted outside of a human brain.

Maybe if you had some isolated human brain in a vat that you could somehow easily manipulate through some kind of future technology, then the line between human and machine gets a little bit fuzzy. In that respect, maybe you're right that superintelligence will first come through human-machine interfacing rather than through AGI. But that still wouldn't count as AGI even if it counts as superintelligence. (Superintelligence by itself, artificial or otherwise, would obviously be very nice to have, though.)

Maybe you and I are just defining AGI differently. To me, AGI involves no biological tissue and is something that can be built purely with transistors or other such resources. That could potentially let us eventually scale it to trillions of instances. If it's a matter of messing around with a single human brain, it could be very beneficial, but I don't see how it would scale. You can't just make a copy of a brain - or if you could, you're in some future era where AGI would likely already have been solved long ago. Even if every human on Earth had such an augmented brain, they would still eventually be dwarfed by the raw power of a large number of fungible AGI reasoning-processors, all acting in sync, or independently, or both.


yes. we probably have different definitions of AGI. For me, artificial means that it's facilitated and/or accelerated by humans. You could eventually get to the point where there are zero biological parts, and my earlier point is that there would probably be multiple iterations before that becomes a possibility. If I understand you correctly, you want to make the jump to "hardware" directly. Given enough time I would not dismiss either approach, although IMHO the latter is less likely to happen.

also, augmenting a human brain in the way I'm describing does not mean that every human would get their brain augmented. It's very possible that only a subset of humans would "evolve" this way and we would create a different subspecies. I'm not going to go into the ethics of the approach, or the possibility that current humans will not like/allow this, although I do think the technology alone would not be enough to make it happen.



