
And if that’s the definition of AGI we’re going by, we’re in no danger of it murdering us all. Imagine thinking GPT-4 was somehow capable of really any harm at all.

It’s also just an incredible failure of risk assessment. Global warming is the real and pressing threat, and it’s here right now. Not sentient murderous AI. But it’s easier to dream about a threat that doesn’t exist than one that does, because you can still imagine a perfect scenario in which you prevent it. Whoops.



> Imagine thinking GPT-4 was somehow capable of really any harm at all.

There are plenty of new ways you could use ChatGPT to mess with society. For example, apparently you only need 8% of people to be talking about something for it to seem like a mass movement. It would be pretty easy for a malicious actor to use LLMs to flood the internet with some new - but fake - story or motivated point of view and kickstart a “mass movement” that way. Or use something like that to heavily influence politics. That might already be happening. We have no idea.


And no amount of well-intentioned AI ethics or prompt denial would stop this sort of malevolent activity.


It would also be easy for a benevolent actor to counter-flood, etc.


Sure; but multiple AI-generated "mass movements" don't cancel each other out.


Who knows? We've never had a world with such a huge supply of mass-movement attachment points.


> But it’s easier to dream about a threat that doesn’t exist than one that does, because you can still imagine a perfect scenario in which you prevent it

Science fiction quite often invents scenarios that, once created, cannot be stopped. The "don't create a black hole on Earth unless you want to be in the black hole" effect.



