cwillu on June 19, 2024 | on: Safe Superintelligence Inc.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

https://www.safe.ai/work/statement-on-ai-risk, signed by Ilya Sutskever among others.
joshuahaglund on June 19, 2024
I clicked, hoping that "human extinction" was just the worst thing they were against. But that's the only thing. That leaves open a whole lot of bad stuff that they're OK with AI doing (as long as it doesn't kill literally everyone).
cwillu on June 19, 2024
That's like saying a bus driver is okay with violence on his bus because he has signed a statement against dangerous driving.