
The scenario I have in my head is that they had to override the safety team's objections to ship their new models before Google I/O.


The "safety" team can go eat grass.

I don't believe in AI "safety measures" any more than I do in kitchen cleaver safety measures.

That is, nothing beyond "keep out of kids' reach" and "don't use it like an idiot" but let the cleaver be a damn cleaver.


> That is, nothing beyond "keep out of kids' reach" and "don't use it like an idiot" but let the cleaver be a damn cleaver.

Neither of which will be enforced with AI


Exactly, just as I can't "enforce" another person not to be an idiot about anything.


A cleaver isn't going to try to kill you without someone holding it....


I genuinely don't get your point. You mean as opposed to an LLM...?



