
Lemme start by saying this is objectively amazing. But I just really wouldn't call it a breakthrough.

We had one breakthrough a couple of years ago with GPT-3, where we found that neural networks / transformers + scale do wonders. Everything else has been a smooth, continuous improvement. Compare today's announcement to the Genie-2 [1] release less than a year ago.

The speed is insane, but not surprising if you put it in the context of how fast AI is advancing. Again, nothing _new_. Just absurdly fast continuous progress.

[1] - https://deepmind.google/discover/blog/genie-2-a-large-scale-...



Wasn't the model winning gold at the IMO the result of a breakthrough? I doubt a stochastic parrot can solve math at IMO level...


Why wouldn't it? I have yet to hear one convincing argument for how our brain isn't working as a function of probable next-best actions. When you look at how amoebas work, then at animals somewhere between them and us in intelligence, and then at us, you see a very similar progression with current LLMs: from almost no model of the world to a pretty solid one.
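To make the "probable next-best action" framing concrete, here's a toy sketch (entirely hypothetical, not any real model): an agent whose whole behavior is repeatedly picking the most probable next action given its current state. LLM decoding is analogous, except the distribution over next tokens is learned rather than hand-written.

```python
# Toy "next-best action" agent. TRANSITIONS is a made-up,
# hand-written distribution over next actions given a state.
TRANSITIONS = {
    "rest":   {"forage": 0.7, "rest": 0.3},
    "forage": {"eat": 0.8, "rest": 0.2},
    "eat":    {"rest": 0.9, "forage": 0.1},
}

def next_best_action(state):
    """Greedy choice: the single most probable next action."""
    options = TRANSITIONS[state]
    return max(options, key=options.get)

def rollout(state, steps):
    """Unroll a sequence of greedy next-best actions."""
    seq = [state]
    for _ in range(steps):
        state = next_best_action(state)
        seq.append(state)
    return seq
```

The point of the analogy: nothing in the loop "understands" foraging, yet coherent behavior falls out of repeatedly sampling the most probable continuation.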


As far as we know, it was "just" scale in depth (model capability) and breadth (multiple agents working on the problem at the same time).
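The "breadth" part can be sketched as self-consistency-style voting: run several independent solver samples on the same problem and keep the majority answer. This is a guess at the general shape, not the actual system; `solve_once` is a hypothetical stand-in for one model sample.

```python
from collections import Counter
import random

def solve_once(problem, rng):
    # Hypothetical noisy solver: right ~70% of the time.
    return problem["answer"] if rng.random() < 0.7 else "wrong"

def majority_vote(answers):
    """Keep the most common answer across agents."""
    winner, _ = Counter(answers).most_common(1)[0]
    return winner

def solve_with_agents(problem, n_agents, seed=0):
    """Breadth scaling: many independent samples, then a vote."""
    rng = random.Random(seed)
    return majority_vote([solve_once(problem, rng) for _ in range(n_agents)])
```

Even with a mediocre single-sample solver, the majority over many independent samples is right far more often than any one sample, which is one reason breadth scaling pays off.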



