Hacker News

What little consolation I had, that maybe the AI experts who continued to insist we needn't worry too much know better, evaporates with this news. I am reminded that even a year ago the experts (including Hinton, as mentioned in this article) were absolutely confident that really intelligent AI was 30 years away. Anyone still trying to argue that we needn't worry about AI had better have a mathematical proof of that assertion.


Most still believe that "really intelligent AI" is a long way off, from what I have seen. Many have started to believe, however, that these systems can cause a lot of harm well before then.


It depends what you mean by "intelligence". For any given definition so far, once AI can do that, we have changed our minds about whether it counts.

So, when I was a kid, "intelligence" meant being good at chess and maths, having a good memory, knowing a lot of trivia, and being able to speak a second language.

On all of these things except language, a raspberry pi and a cheap memory card beats essentially all humans.

For language, even a dictionary lookup — where "hydraulic ram" might become "water sheep" — will beat many, but I'm not sure it would be a majority.
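That "water sheep" failure mode is just per-word dictionary lookup with no context. A minimal sketch in Python, assuming a hypothetical toy dictionary whose entries are chosen only to reproduce the example above:

```python
# Naive word-by-word "translation": pick one dictionary sense per word,
# ignoring context entirely. The toy dictionary below is hypothetical,
# illustrating how "hydraulic ram" (a water pump) can come out wrong.
toy_dict = {
    "hydraulic": "water",  # sense: relating to water or liquid
    "ram": "sheep",        # sense: male sheep (wrong sense for a pump)
}

def word_by_word(phrase: str) -> str:
    """Translate each word independently, keeping unknown words as-is."""
    return " ".join(toy_dict.get(word, word) for word in phrase.split())

print(word_by_word("hydraulic ram"))  # -> "water sheep"
```

Because each word is looked up in isolation, the translator has no way to know that "ram" here names part of a pump, not an animal; that is exactly why dictionary lookup beats only some humans at translation.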

But that's ok, we've changed what we meant by "intelligent" since then.


>On all of these things except language, a raspberry pi and a cheap memory card beats essentially all humans.

llama.cpp runs quite fast on a raspberry pi 8GB, beating most humans at language.


Wow, that's surprising and impressive. Thanks for updating me!


From the article: “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”


The state of the art in AI suddenly appears to be a decade ahead of my expectations from only a couple of years ago, but whether AI powerful enough to warrant actionable concern is here now or decades out doesn't change much. Personally, I was just as concerned about the risks of AI a decade ago as I am now. A decade ago one could already see strong incentives to improve AI, and that persistent effort tended to yield results. While there is much to debate about the particulars, or the timeline, it was reasonable then to assume the state of the art would continue to improve, and it still is.


The experts have been confident that AI is 30 years out for about 70 years now.


My introduction to the field of "AI" was articles bemoaning the "AI Winter" and wondering if the idea could survive as an academic pursuit, because of the over-hype and failures of the 1970s.


Excited tech bloggers/columnists != Experts.


What exactly are people proposing? That we bury our heads in the sand and ban the development of neural networks?

Sure, we can all agree to be worried about it, but I don’t see what drumming up anxiety accomplishes.

The world changing is nothing new.


Government restricts public release of GPT-like research any further and starts treating it like the nuclear-esque risk that it is.


Worry not, there will still be mouthbreathers who insist everything will be bread and roses... and there will still be mouthbreathers insisting that as they wander the ashes of civilization.


I am not worried about AI. I am more worried about those who use it and those who are building it and mostly those who control it. This is true for all technologies.


So you are worried about it?



