
I think the leaps required are:

- Giving GPT-4 short-term memory. Right now it has working memory (context size, activations) and long-term memory (training data), but no short-term memory.

- Give it the ability to have internal thoughts before speaking out loud

- Add reinforcement learning. If you want to write code, it helps if you can try things out with a real compiler and get feedback; that’s how humans do it. (There’s a rough sketch of what I mean below.)
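
For the compiler-feedback point, here’s a rough, minimal sketch of the shape of such a loop, assuming Python 3.10+ and gcc on PATH. generate_code is a hypothetical placeholder for the model call (here it just returns a canned C program), and the "reward" is simply whether gcc accepts the candidate, with the compiler’s error messages fed back in otherwise:

    # Minimal sketch of a compile-and-feedback loop (not a real system).
    # Assumes Python 3.10+ and gcc on PATH; generate_code is a hypothetical
    # stand-in for a model call.
    import os
    import subprocess
    import tempfile


    def generate_code(prompt: str, feedback: str = "") -> str:
        # Hypothetical model call. A real system would send the prompt plus the
        # previous compiler feedback (and perhaps hidden "scratchpad" reasoning)
        # to the model and sample a new candidate program.
        return '#include <stdio.h>\nint main(void) { printf("hello\\n"); return 0; }\n'


    def compile_candidate(source: str) -> tuple[bool, str]:
        # Compile the candidate with gcc and return (success, compiler messages).
        with tempfile.TemporaryDirectory() as tmp:
            src = os.path.join(tmp, "candidate.c")
            binary = os.path.join(tmp, "candidate")
            with open(src, "w") as f:
                f.write(source)
            result = subprocess.run(["gcc", src, "-o", binary],
                                    capture_output=True, text=True)
            return result.returncode == 0, result.stderr


    def feedback_loop(prompt: str, max_attempts: int = 3) -> str | None:
        feedback = ""
        for _ in range(max_attempts):
            candidate = generate_code(prompt, feedback)
            ok, messages = compile_candidate(candidate)
            if ok:
                return candidate  # crude reward signal: it compiles
            feedback = messages   # otherwise feed the errors back in
        return None


    if __name__ == "__main__":
        print(feedback_loop("write a C program that prints hello"))

The real work would obviously be in the model call and in turning compiler errors into a useful training signal; this only shows the outer loop.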

I think GPT-4 plus these properties would be significantly more capable. And honestly, I don’t see a good reason for any of these problems to take decades to solve. We’ve already mastered RL in other domains (e.g. AlphaZero).

In the meantime, an insane amount of money is being poured into making the next generation of AI chips. Even if nothing changes algorithmically, in a few years we’ll have significantly bigger, better, cheaper models that put GPT-4 to shame.

The other hard thing is that while transformers weren’t a gigantic leap, nobody (not even the researchers involved) predicted how powerful GPT-3 or GPT-4 would be. We just don’t know yet what other small algorithmic leaps might make the system another order of magnitude smarter. And one more order of magnitude of intelligence will probably be enough to make GPT-5 smarter than most humans.

I don’t think AGI will be that far away.



You're guessing, which is a problem, because guessing is what gives you license to conclude:

> I don’t think AGI will be that far away.

As to:

> Give it the ability to have internal thoughts before speaking out loud

you have no idea what that means technically, because no one knows how internal thinking could ever be mapped to compute. No one does, so it's okay not to know, but then don't use it as a prior for your guess.

Some have maybe seen this before; if you haven't, I'll say it again: compute ≠ intelligence. The convincing phantasms of reasoning that LLMs offer are what give the outside observer the idea that they are intelligent.

The only intelligence we understand is human intelligence. That's important to emphasize, because any idea of an autonomous intelligence rests on our flavor of human intelligence, which is fuelled by desire and ambition. Ergo, any machine intelligence we imagine is, of course, Skynet.

Somebody in another conversation here pointed out the paperclip optimizer as a counterpoint, but no: Bostrom makes tons of assumptions on his way to the optimizer, like the assumption that an optimizer would optimize humans out of the equation to protect its ability to produce paperclips. There are so many leaps of logic here, and they all assume very human ideas of intelligence, like the idea that the optimizer must protect itself from a threat.


At the end of the day, intelligence guesses. If it doesn't guess, it's just an algorithm.

Now, step beyond your bias that we cannot make intelligent machines and instead imagine you have two intelligent machines pitted against each other. These could be anything from stock-trading applications to machines of war with physical manifestations. If you want either of these things to work, they need to be protected against threats, both digital and kinetic. To think AGI systems will be left to flounder around like babies is very strange thinking indeed.


The drive to protect ourselves from threats arose via evolution, but our abhorrence of killing people to achieve our goals also came from evolution. If you assume the first is implausible in a machine, how can you keep the second?

Everything we assume about intelligence is based on what humans are like because of evolution. Figuring out what intelligence might be like without those evolutionary pressures is really hard.



