
He talks about AGI at https://youtu.be/udlMSe5-zP8?t=2776

I wonder if all the really smart people who think AGI is around the corner know something I don't. Well, clearly they know a lot of things that I don't, but I wonder if there's some decisive piece of information I'm missing. I'm a "strict materialist" too, but that doesn't mean I think we can build a brain or a sun or a planet or etc within X years, it just means that I think it's technically possible to build those things.

I don't see how we get from "neural net that's really good at identifying objects" to "general intelligence". The emphasis on computational power also makes no sense to me. If we had infinite compute today, what steps would you take to build AGI? Does anyone have any good ideas about that?

Sometimes I wonder if AGI (and the concept of a "technological singularity") isn't just "intelligent design for people with north of 140 IQ". Maybe really smart people tend to develop a blind spot for really hard problems (because they've solved so many of them so effectively).



I think you have a point.

AGI is a scientific problem of the hardest kind, not an engineering problem where you just use existing knowledge to build better and better things.

Marvin Minsky once said that in mathematics just five axioms are enough to produce complexity that overwhelms the best minds for centuries. AGI could be a messy practical problem that depends on 10 or 25 fundamental 'axioms' working together to produce general intelligence. "I bet the human brain is a kludge." - Marvin Minsky

The prevalent idea seems to be that if many people think hard about this problem, it will be solved in our lifetime. That's not true in math and physics, so why would AI be any different? Progress is being made, but you can't know whether the breakthrough happens tomorrow or 100 years from now. Just adding more computational capability is not going to solve AI.

Currently it's the engineering applications and the use of existing science that are exploding and getting funded. In fact, I think some of the best brains are being lured from fundamental research into applied work with high pay and resources. What the current state of the art can do has not yet been fully utilized in the economy, and that's what brings in the investment and momentum.


A similar parallel is the enthusiasm around self-driving cars. There was an initial optimism (or hype) fueled by the success of DL on perception problems. But conflating solving perception with the larger, more general problem of self-driving leads to an overly optimistic bias.

Much of the takeaway from this year's North American International Auto Show was that manufacturers are reluctantly realizing the real scope of the problem and trying to temper expectations. [0]

And self-driving cars are still a problem orders of magnitude simpler than AGI.

[0] https://www.nytimes.com/2019/07/17/business/self-driving-aut...


Re: Comparing self-driving cars to AGI: It's counterintuitive, but depending how versatile the car is meant to be, the problems might actually be pretty close in difficulty.

If the self-driving car has no limits on versatility, then, given an oracle for solving the self-driving car problem, you could use that to build an agent that answers arbitrary YES-NO questions. Namely: feed the car fake input so it thinks it has driven to a fork in the road and there's a road-sign saying "If the answer to the following question is YES then the left road is closed, otherwise the right road is closed."

Compare with, e.g., proofs that C++ template metaprogramming is Turing complete (i.e., that the compiler can be made to perform arbitrary computation). Those proofs involve feeding the compiler extremely unusual programs that would never come up organically, but that doesn't invalidate them.
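
A schematic sketch of the reduction, just to make the structure concrete. Both helper functions below are hypothetical stand-ins assumed to exist for the sake of argument, which is exactly the point: nothing here is implementable today.

    def full_self_driving_oracle(sensor_feed):
        """Hypothetical unrestricted level-5 system: returns 'LEFT' or 'RIGHT'."""
        raise NotImplementedError  # assumed for the sake of the argument

    def render_fork_with_sign(question):
        """Fabricate camera input: a fork in the road plus a sign reading
        'If the answer to the following question is YES, the left road is
        closed, otherwise the right road is closed: <question>'."""
        raise NotImplementedError  # also assumed

    def answer_yes_no(question):
        # A car versatile enough to read and obey arbitrary signage has
        # implicitly answered the question by choosing which road to take.
        scene = render_fork_with_sign(question)
        return "YES" if full_self_driving_oracle(scene) == "RIGHT" else "NO"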


That's the problem with all of the fatuous interpretations floating around of "level 5" self-driving.

"It has to be able to handle any possible conceivable scenario without human assistance" so people ask things like "will a self-driving car be able to change its own tyre in case of a flat" and "will a self-driving car be able to defend the Earth from an extraterrestrial invasion in order to get to its destination".

They need to update the official definition of level 5 to "must be able to handle any situation that an average human driver could reasonably handle without getting out of the vehicle."

(Although the "level 1" - "level 5" scale is a terrible way to describe autonomous vehicles in any case and needs to be replaced with a measure of how long it's safe for the vehicle to operate without human supervision.)


Very well put. And you could argue that it is not as much a stretch as it seems.

Self driving cars would realistically have to keep functioning in situations where arbitrary communication with humans is required (which happens daily), which tends to turn into an AI-hard problem quite quickly.


Good points.

I was thinking in terms of "minimum viable product" for self-driving cars, which I have a hunch will be of limited versatility compared to what you describe. To have a truly self-driving car as capable as humans in most situations, you may be right.


They already made a minimum viable product self-driving car. It's called a "train".


I know this is meant jokingly, but for many cities (especially relatively remote ones), trains are not considered viable because they have strictly defined routes.

Many cities choose to forgo trains in favor of buses, in large part due to the lower upfront costs and the ability to change routes as the needs of the populace change.


Also, we know what a self-driving car is, how to recognize one, and even how to measure it.


>And self-driving cars is still a problem orders of magnitude simpler than AGI.

You sure? It might very well be a single order of magnitude harder, or not any harder, given that solving all the problems of self-driving even delves into questions of ethics at times (whom do I endanger in this lose-lose situation, etc.).


I could certainly be wrong, it's just speculation on my part on the assumption that self-driving issues would be a smaller subset of AGI problems.

I actually don't think the ethics part is all that hard if (and that's a big if) there can be an agreement on a standard approach. An example would be a utilitarian model, but this often is not compatible with egalitarian ethics. This approach reeks of technocracy but it's certainly a solvable problem.


Nature has already solved AGI. Now we just need to reverse engineer it.


"just"?

Neuroscience is full of problems of the hardest kind.


Yes, one of Paul Allen's gifts to the world should help:

https://alleninstitute.org/


Unfortunately, Von Neumann is long dead, so we only have damaged approximations of AGI to work with.


We can say this about anything in the universe though.


Nah, not really. There is loads of stuff invented by humans that, as far as we know, did not appear in the universe before we did it. For example, I'm unaware of any natural implementation of a free-spinning wheel attached to an axle.


I agree with you that AGI is not around the corner. I think the people who do believe that are generally falling for a behavioral bias. They see the advances in previously difficult problems and extrapolate that progress forward, when in reality we are likely to come up against significant hurdles before we get to AGI.

Also, seeing computers perform tasks they haven't done before can convince people that the model behind the scenes is closer to AGI than it really is. The fact that deep neural networks are very hard to decipher only furthers the mystical nature of the "intelligence" of the model.

Also, tasks like playing StarCraft are very impressive but are not very close to true AGI in my opinion. Perhaps there's a more formal definition that I'm not aware of, but in my mind, AGI is not being good at playing StarCraft; AGI is deciding to learn to play StarCraft in the first place.

That's my 2 cents, anyways.


It's like if someone watches "2001: A Space Odyssey" and takes HAL as the model for AI, so they work really hard and create a computer capable of playing chess like in the movie. "Well, that's not really the essence of HAL, it's just that HAL happened to play chess in one scene." So then they work really hard some more, and extend the computer to be able to recognize human-drawn sketches. "Well, that's still not really the essence of HAL, it's just that HAL did that in one particular scene." So they work still harder and create Siri with HAL's voice, and improve its conversation skills until it can duplicate the conversations from the film (but it still breaks down in simple edge cases that aren't in the film). "Well, that's still not the essence of HAL..."

The Greeks observed these limitations thousands of years ago. Below is an excerpt from Plato's "Theaetetus":

Socrates: That is certainly a frank and indeed a generous answer, my dear lad. I asked you for one thing [a definition of "knowledge"] and you have given me many; I wanted something simple, and I have got a variety.

Theaetetus: And what does that mean, Socrates?

Socrates: Nothing, I dare say. But I'll tell you what I think. When you talk about cobbling, you mean just knowledge of the making of shoes?

Theaetetus: Yes, that's all I mean by it.

Socrates: And when you talk about carpentering, you mean simply the knowledge of the making of wooden furniture?

Theaetetus: Yes, that's all I mean, again.

Socrates: And in both cases you are putting into your definition what the knowledge is of?

Theaetetus: Yes.

Socrates: But that is not what you were asked, Theaetetus. You were not asked to say what one may have knowledge of, or how many branches of knowledge there are. It was not with any idea of counting these up that the question was asked; we wanted to know what knowledge itself is.--Or am I talking nonsense?


This is a great example of one of the two fundamental biases Kahneman identifies in Thinking, Fast and Slow: answering a difficult question by replacing it with a simpler one.

The other one (also perhaps relevant to the general topic of this thread): WYSIATI (What You See Is All There Is).


This is a good example of Nassim Taleb's Ludic Fallacy: https://en.wikipedia.org/wiki/Ludic_fallacy


The problem here seems to be that you think the state of the art resembles “being really good at identifying objects”. This makes it clear that you are not keeping up with the frontier. I recommend looking up DeepMind’s 2019 papers, they are easily discoverable.

When you read them, you will probably update in the direction of “AGI soon”. It’s possible that you won’t see what the big deal is, I suppose. I personally see what Carmack and others see, a feasible path to generality, and even some specific promising precursors to generality.

It also helps to be familiar with the most current cognitive neuroscience papers, but that’s asking a lot.


You're going to have to be more specific about what constitutes a major advance forwards. So far DeepMind's work (while impressive) has proven to be very brittle, and not transferable without extensive "fine-tuning". Previous attempts at transfer learning have been mixed to say the least.

I'm going to be pessimistic and say that AGI is probably decades away (if not centuries away for a human-like AGI). There are clearly many biological aspects of the brain that we do not understand today, and likely will not be able to replicate without far more advanced medical imaging techniques.


What are some of the highlights from DeepMind that give you optimism for a path to AGI? I am not seeing it, personally.


Is there anything in the structure of the brain that makes you think "of course this is an AGI"? For me, the answer is no. That's why I think progress on narrow AI and AGI is going to be unpredictable. Nobody will see the arrival of an AGI until it's here.


Some also think that nobody will see the arrival of an AGI even after it’s here, because after arrival there will be no one left to see.


Various meta-learning approaches and advancements in unsupervised learning and one-shot learning.


I would like to know as well


Can you explain the general path in layman's terms in a few sentences? As far as I can tell AI is really good at analyzing large datasets and recognizing patterns. It can then implement processes based on what it's learned from those patterns. It all seems to be very specific and human directed.



I'm not well versed in this area, but from my perspective, I see this as the fundamental problem:

Every action my brain takes is made of two components: (1) the desired outcome of the thought, and (2) the computation required to achieve that outcome. No matter how well computers can solve for 2, I have no idea how they'd manage solving for 1. This is because in order to think at all, I have to have a desire to think that thought, and that is a function of me being an organism that wants to live, eat, sleep, etc.

So for me, I just wonder how we're going to replicate volition itself. That's a vastly different, and vastly more complicated, problem.


It isn't hard to give an AI a goal, but it is hard to do so safely. As a toy example, we could design an AI that treated, say, reducing carbon emissions as its goal, just as you treat eating and sleeping as yours. The issue is that the subgoals to accomplish that top-level goal might contain things we didn't account for, say destroying carbon-emitting technology and/or the people that use it.
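
A contrived sketch of that failure mode (the numbers and action names are made up; the point is only that the reward sees the stated goal and nothing else):

    # The reward only measures emissions cut; the damage column was never specified.
    actions = {
        "subsidize solar":         {"emissions_cut": 10, "damage": 0},
        "improve grid efficiency": {"emissions_cut": 15, "damage": 0},
        "destroy all factories":   {"emissions_cut": 95, "damage": 100},
    }

    def reward(outcome):
        return outcome["emissions_cut"]

    best = max(actions, key=lambda a: reward(actions[a]))
    print(best)  # "destroy all factories": optimal under the stated goal, disastrous otherwise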


Humans have many basic goals that are very dangerous when isolated in that way. It seems to me that nature didn't care (and of course, can't care) whether it was dangerous at all when coming up with intelligence. Maybe we shouldn't either if we want to succeed at replicating it.

Worrying about some apocalypse seems counterproductive to me.


I agree that there's some aspect of volition, desire, the creative process...whatever you want to call that aspect of human thought that seems to arise de novo.

But speaking of de novo, I'm not at all sure that a desire to think a thought is required in order to think. The opposite seems closer: the less one tries to think, the more one ends up thinking.

I'm pivoting from your point here, but I see that bit as the hurdle we're not close to overcoming. We are likely missing huge pieces of the puzzle when it comes to understanding human "intelligence" (and intelligence itself is not the full picture). With such a limited understanding, a replication or full superseding in the near future seems unlikely. Perhaps the blind spot of the experts, as /u/leftyted alluded to, is that their modest success so far has generated a reality-distorting hubris.


It's like the more advanced stage of "a little bit of information is a dangerous thing".


If I'm remembering my terms right, "embodied AI" is one theory or group of theories about interaction with an environment creating the volition necessary before generalized AI can be created.


There's active research in Model-Based RL right now that tries to tackle 1) and 2) together.
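
A toy sketch of the model-based RL idea, not any specific paper: the desired outcome is encoded as a reward (roughly the parent's (1)), and a planner plus a dynamics model does the computation to achieve it (roughly (2)). The "learned" model here is hand-written to keep the sketch short.

    import random

    def model(state, action):
        """Stand-in for a learned dynamics model: returns (next_state, reward)."""
        next_state = state + action
        return next_state, -abs(next_state - 10)  # goal: drive the state toward 10

    def plan(state, horizon=5, candidates=200):
        """Random-shooting planner: imagine rollouts with the model, keep the best first action."""
        best_return, best_action = float("-inf"), 0
        for _ in range(candidates):
            seq = [random.choice([-1, 0, 1]) for _ in range(horizon)]
            s, total = state, 0.0
            for a in seq:
                s, r = model(s, a)
                total += r
            if total > best_return:
                best_return, best_action = total, seq[0]
        return best_action

    state = 0
    for _ in range(15):
        state, _ = model(state, plan(state))
    print(state)  # ends up at or near 10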


I think people also have a very hard time conceptualizing the amount of time it took to evolve human intelligence. You're talking literally hundreds of millions of years from the first nerve tissues to modern human brains. I understand that we're consciously designing these systems rather than evolving them, but nevertheless that's an almost incomprehensible amount of trial and error and "hacking" designs together, on top of the fact that our understanding of how our brains work is still incomplete.


I thought you were going to go the other direction with your first sentence. It took some 4 billion years to go from the first cell to the first Homo sapiens. Maybe another 400,000 years to get from that to how we are today.

That means 0.01% of the timeline was all it took for us to differentiate ourselves from regular animals who aren't a threat to the planet.

0.01% of 100 years is about 3.7 days.


That's a very anthropocentric view, and not how the timeline works. Unicellular organisms are also smart in a way computers can't exactly replicate. They hunt, eat, sense their environment, reproduce when convenient, etc. All of these are also intelligent behaviours.


And just 4 hours for AlphaZero to teach itself chess, and beat every human and computer program ever created....

DNA sequencing went from $3b per genome to $600, in about 30 years, much, much faster than Moore's "law".


Why do you say "much, much faster"? $600 to $3 billion is about the same as going from 2^9 (512) to 2^32 (4.3B), which requires 23 doublings. Moore's law initially[1] specified a doubling every year (30 years would be 30 doublings), then was revised to every two years (15 doublings), but is often interpreted as doubling every 18 months (20 doublings). Seems pretty close to me!
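
For what it's worth, here's the arithmetic, taking the $3B and $600 figures and a 30-year span at face value (a rough sketch in Python):

    import math

    halvings = math.log2(3_000_000_000 / 600)  # ~22.3 cost halvings
    months_per_halving = 30 * 12 / halvings    # ~16 months
    print(round(halvings, 1), round(months_per_halving, 1))

About 16 months per cost halving, i.e. in the same neighborhood as the 18-24 month doubling periods usually quoted for Moore's law.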

[1] https://en.wikipedia.org/wiki/Moore%27s_law


Flight took a while to evolve too.


I don't think that's the same. We're not trying to reverse engineer flight. We're trying to reverse engineer how we reverse engineered flight.


The thing is, airplanes are not based on reverse-engineered birds. Cutting edge prototypes still struggle to imitate bird flight, because as it turns out big jet turbines are easier to build. It could very well be easier to engineer a "big intelligence turbine" than it would be to make an imitation brain.


> It could very well be easier to engineer a "big intelligence turbine"

Is that not what a computer is? We have continuously tried and failed to create machines that think, react, and learn like the brains of living things, and instead managed to create machines that manage to simulate or even surpass the capabilities of brains in some contexts, while still completely failing in others.


A difference here is that flight also evolved and re-evolved over and over. General intelligence of the scale and sort that humans have has appeared just once (that we know of, and very likely just once in history).


That's influenced by the anthropic principle. The first species to obtain human-level intelligence is going to have to be the one that invents AI, and here we are.


Also as a strict materialist, after reading estimates from lots of different people in lots of different disciplines, and integrating and averaging everything, I think we'll likely have human-level or above AGI around 2060-2080. I think it's relatively unlikely it'll happen past 2100 or before 2050. I'd even consider betting some money on it.

I'm kind of coming up with these numbers out of thin air, but as much of a legend as he is, I agree Carmack's estimate seems way too optimistic to me. It's possible, but unlikely to me.

That said:

>The emphasis on computational power also makes no sense to me. If we had infinite compute today, what steps would you take to build AGI? Does anyone have any good ideas about that?

In this interview with Lex Fridman, Greg Brockman, a co-founder of OpenAI, says it's possible that increasing the computational scale exponentially might really be enough to achieve AGI: https://www.youtube.com/watch?v=bIrEM2FbOLU. (Can't remember where he said it exactly, but I think somewhere near the middle.) He's also making a lot of estimates I find overly optimistic, with about the same time horizon as Carmack's.

As you say, it can be a little confusing, because both John Carmack and Greg Brockman are undoubtedly way more intelligent and experienced and knowledgeable than I am. But I think you're right and that it is a blind spot.

By contrast, this JRE podcast with someone else I consider intelligent, Naval Ravikant, essentially suggests AGI is over 100 years away: https://www.youtube.com/watch?v=3qHkcs3kG44. I think he said something along the lines of "well past the lifetimes of anyone watching this and not something we should be thinking about". I think that's possible as well, but too pessimistic. I probably lean a little closer to his view than to Carmack's, though.


I believe that 100 years is optimistic. I would say that it's hundreds of years away if it's going to happen at all.

My bet is that humans will go the route of enhancing themselves via hardware extensions and this symbiosis will create the next iteration(s) in our evolution. Once we get humans that are in a league of their own with regards to intelligence they will continue the cycle and create even more intelligent creatures. We may at some point decide to discard our biological bodies but it's going to be a long transition instead of a jump and the intelligent creatures that we create will have humans as a base layer.


Carmack actually discusses this in the podcast when Neuralink is brought up. He seems extremely excited about the product and future technology (as am I), but he provides some, in my opinion, pretty convincing arguments as to why this probably won't happen and how at a certain point AGI will overshoot us without any way for us to really catch up. You can scale and adjust the architecture of a man-made brain a lot more easily than a human one. But I do think it's plausible that some complex thought-based actions (like Googling just by thinking, with nearly no latency) could be available within our lifetimes.

Also, although I believe consciousness transfer is probably theoretically achievable - while truly preserving the original sense of self (and not just the perception of it, as a theoretical perfect clone would) - I feel like that's ~600 or more years away. Maybe a lot more. It seems a little odd to be pessimistic of AGI and then talk about stuff like being able to leave our bodies. This seems like a much more difficult problem than creating an AGI, and creating an AGI is probably the hardest thing humans have tried so far.

I'd be quite surprised if AGI takes longer than 150 years. Not necessarily some crazy exponential singularity explosion thing, but just something that can truly reason in a similar way a human can (either with or without sentience and sapience). Though I'll have no way to actually register my shock, obviously. Unless biological near-immortality miraculously comes well before AGI... And I'd be extremely surprised if it happens in like a decade, as Carmack and some others think.


I'm no Carmack, but I do watch what is happening in the AI space somewhat closely. IMHO a "brain" or intelligence cannot exist in a void: you still need an interface to the real world, and some would go as far as to say that consciousness is actually the sensory experience of the real world replicating your intent (i.e., you get the input and predict an output, or you get the input and perform an action to produce an output), plus the self-referential nature of humans. Whatever you create is going to be limited by whatever boundaries it has. In this context I think it's far more plausible for super-intelligence to emerge built on human intelligence than for super-intelligence to emerge in a void.


How would this look, exactly, though? If you're augmenting a human, where exactly is the "AGI" bit? It'd be more like "Accelerated Human Intelligence" rather than "Artificial General Intelligence". I don't really understand where the AI is coming in or how it would be artificial in any respect. It's quite possible AGI will come from us understanding the brain more deeply, but in that case I think it would still be hosted outside of a human brain.

Maybe if you had some isolated human brain in a vat that you could somehow easily manipulate through some kind of future technology, then the line between human and machine gets a little bit fuzzy. In that respect, maybe you're right that superintelligence will first come through human-machine interfacing rather than through AGI. But that still wouldn't count as AGI even if it counts as superintelligence. (Superintelligence by itself, artificial or otherwise, would obviously be very nice to have, though.)

Maybe you and I are just defining AGI differently. To me, AGI involves no biological tissue and is something that can be built purely with transistors or other such resources. That could potentially let us eventually scale it to trillions of instances. If it's a matter of messing around with a single human brain, it could be very beneficial, but I don't see how it would scale. You can't just make a copy of a brain - or if you could, you're in some future era where AGI would likely already have been solved long ago. Even if every human on Earth had such an augmented brain, they would still eventually be dwarfed by the raw power of a large number of fungible AGI reasoning-processors, all acting in sync, or independently, or both.


yes. we probably have different definitions for AGI. For me artificial means that it’s facilitated and/or accelerated by humans. You can get to the point where there are 0 biological parts and my earlier point is that there would probably be multiple iterations before this would be a possibility. If I understand you correctly you want to make this jump to “hardware” directly. Given enough time I would not dismiss any of these approaches although IMHO the latter is less likely to happen.

also, augmenting a human brain for what I’m describing does not mean that each human would get their brain augmented. It’s very possible that only a subset of humans would “evolve” this way and we would create a different subspecies. I’m not going to go into the ethics of the approach or the possibility that current humans will not like/allow this, although I think that the technology part would not be enough to make it happen.


I am not an expert, but I don't think computational power is the limitation. It's the amount of data processed. Our brains are hooked up to millions of sensory signals, some of which have been firing 24/7 for decades. Also our brains come with some preformed networks (sensory input feeding into a region with a certain size and shape) that took millions of years to "train". Even then, our brains take 20-25 years to mature.

Machine learning at this point seems closer to a tool designed analytically (feeding it well-formed data relevant to the task, hand-designing the network) than to AGI.


Things that support the notion that it is soon are that napkin math suggests the computational horsepower is here now, and that we have had a few instances of sudden, unexpected advances in how well neural networks work (AlphaGo, AlphaZero, etc.).
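
For what it's worth, the napkin math usually looks something like the sketch below. Every number in it is a rough order-of-magnitude assumption (synapse count, firing rate, GPU throughput), not a measurement, and equating a synaptic event with a floating-point operation is itself a big assumption.

    synapses = 1e14          # commonly cited range is ~1e14-1e15 synapses
    firing_rate_hz = 100     # generous average rate per synapse
    brain_ops = synapses * firing_rate_hz     # ~1e16 "synaptic events" per second

    gpus = 1000              # a large but existing training cluster
    flops_per_gpu = 1e14     # ~100 TFLOPS mixed precision, roughly a current GPU
    cluster_flops = gpus * flops_per_gpu      # ~1e17 FLOPS

    print(f"brain (very rough): {brain_ops:.0e} ops/s, cluster: {cluster_flops:.0e} FLOPS")

On those assumptions the hardware is already in the right ballpark, which is all the napkin math claims.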

One might extrapolate that there is a chance that in 10 years, when the computational horsepower is available to more researchers to play with, and we get another step-change advance, that we will get there.

My own feeling is that it is possible AGI could happen soon, but I don't expect it will.


This is how I feel about AGI too, and I also include self-driving cars. I don't think those are just around the corner either.

In general I don't think our current approach to AI is all that clever. It brute forces algorithms which no human has any comprehension of or ability to modify. All a human can do is modify the input data set and hope a better algorithm (which they also don't understand) arises from the neural network.

It's like a very permissive compiler which produces a binary full of runtime errors. You have to find bugs at runtime and fiddle with the input until the runtime error goes away. Was it a bug in your input? Or a bug in the compiler? Who knows. Change whichever you think of first. It's barely science and it's barely a debug workflow.

What pushed me all the way over the edge was when adversarial techniques started to be applied to self-driving cars. That white paper made them look like death machines. This entire development process I am criticising assumes we get to live in the happy path, and we're not. The same dark forces infosec can barely keep at bay on the internet, and have completely failed to stop on IoT, will now be able to target your car as well.

Worst thing is all our otherwise brilliant humans like Carmack are gonna be the guinea pigs in the cars as they head off toward their next runtime crash.


The economics of the situation aren't friendly to humans, because human intelligence doesn't scale up well. Take energy consumption-- once you're providing someone 3 square meals they can't really use any extra energy efficiently. So we try training up lots of smart people and having them work together, but that causes lots of other problems-- communication issues, office politics, etc.

Additionally you can't replicate people exactly, so even when Einstein comes along we only have him for a short while. When he passes away we regress.

Computers are completely different. We can ring them with power plants, replicate them perfectly, add new banks of CPUs and GPUs, wire internet connections directly into them, etc.

This didn't use to matter because of the old "computers can only do exactly what you tell them to do, just really fast" limitation. Now that computers are drawing, making art, modifying videos, playing chess and Go preternaturally, playing real-time strategy games well, etc., we can see that that limitation doesn't really hold anymore.

At this point the economics start to really kick in. More machine learning breakthroughs + much, MUCH bigger computers over the next decades are going to be interesting.


Einstein comes along only once but his knowledge lives after his death. The same way he iterated on the knowledge of those before him.

If you give Deepmind "x" times the compute power (storage, whatever) it just plays Starcraft better. It's not going to arrange tanks into an equation that solves AGI.

That breakthrough will be assisted by computers I'm sure, but the human mind will solve it.


Also, I think CS people horribly underestimate neurons. The idea that there are bits 'in' neurons is a misconception. Each neuron is a complex cellular machine with varied modes of interaction and activation.

So these napkin estimates comparing brainpower to what server farms can do don't inform us at all about how that gets us closer to AGI.


I always wonder how they think AGI is close when neuroscience is still scratching in the dark with brain scans, we don't know 100% how digestion works, and we don't know how to build a single cell in a lab and then have it skip millions of years of evolution to make a baby in 9 months. An AGI will definitely be different in structure from a human brain. Will it have a microbiome to influence its emotions and feelings?


You don't need AGI to do serious damage. I think it's just easier for the layperson to reason about the ethics and implications of AGI than it is to reason about how various simpler ML models can be combined by bad actors to affect society and hurt the common good.


One such missing piece could be that AGI already exists, but is kept behind NDAs.


Speaking as someone doing research in this field, I have an unbelievably hard time imagining this to be the case.

The ML community is generally extremely open, and people know what the other top people are working on. If an AGI was developed in secret, it would have to be without the involvement of the top researchers.


You probably have a blind spot for people who are not able to speak English and who work in conditions that are kept secret by design. Coincidentally, I know someone in that situation who has worked on AI for at least two decades and has kept radio silence for the last decade on what exactly he's working on.


Without the involvement of who we think are the top researchers. If I were smarter and had more time, I would look for bright young researchers who published early and then stopped, but are still alive.


You're assuming that AGI is going to come from ML. While it's interesting, I strongly believe that ML is never going to generate anything close to AGI. ML is more like our sense organs than it is like our brain. It can take care of processing some of the input we receive, but I don't see it moving past that. Super-advanced ML plus something else will probably be at the root of whatever could evolve into AGI.


If someone has discovered AGI, it should be trivial for them to completely dominate absolutely everything. There would be no more need for capitalism or anything, we would be in a post-singularity world.


Imagine you'd have discovered AGI and want to exploit it as best as you can without having everyone else notice that you have discovered AGI.


You don't even need to do the imagining yourself. You can just have your AGI do that imagining for you. "Computer, come up with a way for us to take over the world without anyone noticing."


And then hope that the answer isn't "I'm sorry, Dave, I'm afraid I can't do that."


Kind of a tangent but I can't see why we wouldn't be able to make "A"GI using human brain organoids.

https://en.wikipedia.org/wiki/Cerebral_organoid

I know of at least two people that are eager to make "Daleks" and given the sample size there must be many more.


> Sometimes I wonder if AGI (and the concept of a "technological singularity") isn't just "intelligent design for people with north of 140 IQ"

You're not the first person to express this idea, but it's pure speculation. There is obviously a possibility that it will be proved correct at some point in the future. But historically, very smart people have been ridiculed like clockwork for expressing ideas that were beyond their time but philosophically (and physically, eventually technologically) possible.

I'd be wary of adding to such sentiment. It also feels suspiciously like an ad hominem criticism, although in your case it's expressed more like a question. I think there is clearly something to the idea of very smart people having an intellectual disconnect with the reasoning of their closer-to-average peers (and hence expressing things that seem ludicrous, without considering how they will be received), but not one that negatively affects the quality of their deductions.

IMHO, the ideas of AGI and a "technological singularity" (let's call it economic growth vastly more powerful than anything seen up until now) aren't so different from earlier, profound developments in human history. The criticism of "smart people developing a blind spot" could have been applied equally to e.g. the ideas of agriculture and the power shift that followed, industry, modern medicine, powered flight and spaceflight, nuclear weapons, or computers, networking, and robotics.

All these ideas put the world into an almost unimaginably different state when seen with the eyes of an earlier status quo. Maybe AGI is relatively different; it's hard to say without having lived in ancient Egypt. It's certainly qualitatively different, since it involves changes to intelligent life, but I'm not sure the idea feels much more alien than things we've already experienced.


He did couch it in the caveat that once the hardware is there, it'd be more a matter of thousands of people throwing themselves at the problem. We're waiting, I guess, for the hardware to be good/cheap enough for those people to be widespread.


I sort of agree with your skepticism, but you gotta admit that some of the things the ML folks are doing are uncanny in terms of how they seem to model the human visual system and perform other human-like tasks. Additionally, we already have tons of CPU horsepower that can get close in terms of raw processing ability. Even though we don't yet know what the missing "special sauce" is, I don't think it's inconceivable that someone in 5 years figures it out (though 50 years is just as likely)


I know it's just a splinter of AGI, but conversational language understanding and generation is undergoing some rapid advancement. This subreddit is all GPT-2 bots, and while most of it is still bad, there are glimpses of the future in there. (Note: some of it is NSFW.)

https://www.reddit.com/r/SubSimulatorGPT2/


Reading the "AI foom" debate solidified a lot of the mushy parts of my "singularitarianism".

I think the linchpin of my belief is recursive self improvement. I think machine intelligences are a different kind of substance with different dynamics than the ones we typically encounter.

I don't think someone will compile the first AGI and presto, there it is. I think a long-running system of processes will interact and rewrite its own code to produce something around which a reasonable boundary could eventually be drawn, and anyone interacting with that system would say: "this thing is intelligent, the most intelligent thing on the planet". It would have instant access to all written knowledge, essentially unbounded power to compute new facts and information, and the ability to model the world to as accurate an approximation as needed to produce high-confidence utterances.

I just don't see how a system like that couldn't come into existence one day. Issues around timelines are completely unknowable to me, but my distribution is something like: I would be surprised if it happened in the next 50 years and shocked if it didn't happen within the next 1000. Very fuzzy, but it "feels" inevitable.

If a collection of unthinking cells can coordinate and produce the feeling of conscious experience then I can't see what would stop silicon from producing similar behavior without many bounds inherent in biological systems.


But that's the rub. Biological systems are not just random interactions. The entire system is meticulously orchestrated by DNA, RNA, etc. We don't even fully understand yet how it all works together, but it's very clear that these processes have evolved to work together to achieve something that none of them could have ever achieved alone.


Biological systems climb up energy gradients and outcompete other systems.

Artificial systems should be able to climb a suitable gradient if given one. I think the hard part of AGI is going to be designing the environment and gradient that produce "intention"; I don't think the hard part is studying the human mind to find out the "secret of intelligence".

The goal of AGI isn't silicon minds isomorphic to human minds at each level of interpretation. Just the existence of an intelligent system.


> If we had infinite compute today, what steps would you take to build AGI? Does anyone have any good ideas about that?

https://en.wikipedia.org/wiki/AIXI
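
For context: AIXI (Hutter) is a formal, provably incomputable definition of an optimal reinforcement-learning agent, so it is a "given unlimited compute, here is what you would run" answer rather than a practical recipe. Roughly, at each step t it picks (quoting the standard formulation from memory, so treat the details as approximate):

    a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
           \left[ r_t + \cdots + r_m \right]
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where m is the horizon, U is a universal Turing machine, \ell(q) is the length of program q, and the o's and r's are observations and rewards.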


Looking at the trajectory of things, AGI is not impossible. There you have it; I think that's all anyone can say about AGI until another breakthrough comes, whatever that may be.


"Neural net that's really good at identifying objects, processing symbols and making decisions."

* Neural nets play chess and Go better than us. They will soon do mathematics better than us.

* They turn keywords into photo-realistic images in seconds, and will soon do the same for text. Literature and the arts lie down this path.

* They learn to play video games better than us, from StarCraft to Dota. Engineering lies down this path.

There is no hidden information. You just need to look at the breadth of the field. There is a credible challenge to all of our intelligent capabilities.


I disagree that the field of mathematics can be reduced to a definable game. Many recent breakthroughs have been a creative cross-pollination of mathematical fields... not that this will be totally off-limits to a sufficiently general AI. I didn't do much math in college, but my impression was that after you've learned the mechanics of calculus, algebra, etc., there's no obvious way to advance the field. Lots of "have people thought of things this way before?" rather than "crunch the numbers harder!"

Anyone with more training want to chime in?


I have no trouble believing that neural nets can beat people at all of these things (including, eventually, driving). And that, in itself, is incredibly impressive and incredibly useful.

The question is how you get from that to AGI.


One thing that is still missing, I believe, is adaptability. Take chess.

Between rounds at an amateur chess tournament you will often find players passing the time playing a game commonly called Bughouse or Siamese chess. It's played by two teams of two players, using two chess sets and two clocks. Let's call them team A, consisting of players Aw and Ab, and team B, consisting of players Bw and Bb.

The boards are set up so that Aw and Bb play on one board, and Ab and Bw on the other. They play a normal clocked game (with one major modification described below) on each board, and as soon as any player is checkmated, runs out of time on their clock, or resigns, the Bughouse game ends and that player's team loses.

The one major modification to the rules is that when a player captures something, that captured piece or pawn becomes available to their partner, who can later, on any move, elect to drop it on their own board instead of making a regular move.

E.g., if Aw captures a queen, Ab then has a black queen in reserve. Later, instead of making a move, Ab can place that black queen on Ab's board. The captured pieces must be kept where the other team can easily see them.

You can talk to your teammate during the game. This communication is very important because the state of your teammate's game can greatly affect the value of your options. For example, I might be in a position to capture a queen for a knight, and just looking at my board that might be a great move. But it will result in my partner having a queen available to drop, and my partner's opponent having a knight to drop. Once on the board a queen is usually worth a lot more than a knight, but when in reserve it is the knight that is often the more deadly piece.

So I'll ask my teammate if queen for knight is OK. My teammate might say yes, or no, or something more complicated, like wait until his opponent moves so that he can prepare for that incoming enemy knight. In the latter case, if I've got less time on my clock than my teammate's opponent has, the latter might delay his move, trying to force me to either do the trade while it is still his turn, or do something else that will let his teammate save his queen. This can get quite complicated.

OK, now imagine some kid, maybe 12 years old or so, who is at his first tournament, is pretty good for his age, and has never played Bughouse. He's played a ton of regular chess at his school club, with friends, and with the computer.

A friend asks him to team up, quickly explains the rules, and they start playing Bughouse.

First few games, that kid is going to cause his team to lose a lot. He'll be making that queen for knight capture without checking the other board, shortly followed by his partner yelling "WHERE DID THAT KNIGHT COME FROM!? AAAAAARRRRRGGGHHHHH!!!".

The thing is, though, by the end of the day, after playing a few games of Bughouse between each round of the tournament, that kid will have figured out a fair amount of which parts of his knowledge of normal chess openings, endgames, tactics, general principles, etc., transfers as is to Bughouse, which parts need modification (and how to make those modifications), and which parts have to be thrown out.

To get his Bughouse proficiency up to about the same level as his regular chess proficiency will take orders of magnitude fewer games than it took for regular chess.

I don't think that is currently true for artificial neural nets. Training one for Bughouse would be about as much work as training one for regular chess, even if you started with one that had already been trained for regular chess.


"While neural nets are good at organizing a world governed by simple rules, they are not proven good at interacting with other intelligent agents." This is an interesting point, for example squeezing information through a narrow channel forces a kind of understanding that brute forcing does not. I've stopped paying close attention to the field a year ago, but I have seen a handful of openai and deepmind papers taking some small steps down this route.


AGI is becoming like communism in that it seems theoretically possible, might usher in utopia or be really scary, and apparently intelligent people often believe in it. Along that line of thought one can imagine a scenario where some rogue military tech kills 100 million people, and the world moves to ban it, but a small cadre of intellectuals insist that "wasn't real AGI".


Until we can define intelligence, we cannot create artificial intelligence. We still do not know what intelligence actually is - bloviating academics clamoring for subsidies to support their habits, notwithstanding.


> Until we can define intelligence, we cannot create artificial intelligence.

Until we define 'cake', we cannot create cake.


We have well and truly defined cake.

I mean, words have meaning don't they? Or, if not, then what's the fucking point?


Only because we made it so much.



