
“The technology they’re working on poses the gravest of dangers.”

Or, rather, it's been actively marketed by Altman as doing so, as part of his effort to buy influence with government to restrict competition.

The simplest explanation for the conflict is that it doesn't, that OpenAI internally has allocated resources based on reality rather than its public propaganda, and that the team whose internal turf was built around that propaganda left in a huff.



> Or, rather, it's been actively marketed by Altman as doing so, as part of his effort to buy influence with government to restrict competition.

I mean, maybe sama is also doing so.

But you're ignoring the many people who have been warning of these dangers for years, some before OpenAI even existed, and many with no part of OpenAI and no monetary incentive.

To describe everything as "Altman is marketing this as dangerous" is to completely ignore the majority of researchers ringing alarm bells.


The world is full of dangers. AI danger has gotten some attention now - whether it turns out to be warranted beyond certain levels is still unclear.

While there has been some regulatory action it also feels there is some "moving on" from the more apocalyptic risk views towards more mundane risks and their management (i.e., more vanilla product and technology risk management).


> While there has been some regulatory action it also feels there is some "moving on" from the more apocalyptic risk views towards more mundane risks and their management

I don't think this is correct. It's more accurate to say that multiple camps with different fears have risen up over the last few years, and the ones worried about more "mundane" AI risks have gotten their views heard more.

I think both groups - "mundane" AI-safety and AI-existential-risk worriers - have gotten more of an audience for their views as AI has proven more capable.


> it also feels there is some "moving on" from the more apocalyptic risk views towards more mundane risks and their management (i.e., more vanilla product and technology risk management).

No surprise, when the public reception to AI x-risk was mostly "big tech scaremongering / regulatory capture; why not focus on Real AI Dangers Right Now, like bias or offensive language". Seemingly refocusing on the mundane may now be the only way to do something about the apocalyptic.


There is a possibility that the apocalyptic risk just isn't there in any meaningful way, while the mundane risks from new tools actually do need attention (though they might not even need new regulation).


> technology they’re working on poses the gravest of dangers.

> Or rather, it's been actively marketed by Altman as doing so

Both can be true at the same time.



