
> In any sufficiently large tech company, the "risk mitigation" leadership (legal, procurement, IT, etc.) has to operate in a kind of Overton window that balances the risks they are hired to protect the corp from against the need or desire of senior leadership to play fast and loose when they want, or feel they need, to.

We have seen what happens when companies err heavily on the side of risk mitigation for LLMs. Google recently launched AI products so heavily sanitized and overprotected that they misinterpreted simple tasks as having dangerous or offensive implications.

They let the safety team run the show, and the resulting product was universally hated for it. It's interesting now to see a company produce what is by most measures a class-leading product, only for the tech community to hate them as well, this time for not letting the safety team dominate product development.



> They let the safety team run the show and the resulting product was universally hated for it.

Yes, though Google is extra cautious because the "Google" brand is worth over $100B a year in revenue, and they want to make sure nothing ever tarnishes their reputation. So it's not clear to me that "safety" always means what it means for Gemini. OpenAI would still have a lot of flexibility to do safety their own way.


> they want to make sure nothing ever tarnishes their reputation

In the spirit of where you want the conversation to go, I see the point you're making.

However, they are willing to tarnish their reputation as long as it's not ruined. Their reputation on support is rubbish. Their reputation on YouTube's automated violation handling is rubbish. Their reputation for releasing a pet project, letting it just start to gain traction, and then killing it is rubbish. Their reputation for allowing their search to be gamed by SEO and ad buyers at the expense of smaller sites is rubbish.


Sure. But in all the areas you mention, Google's rubbish reputation does not lead the masses to ask their respective representatives whether Maybe Something Needs To Be Done About It. The closest is the EU vs. Big Tech fight over privacy and interoperability, which the EU treats as a serious issue, but it doesn't quite have the magic outrage-inducing quality that an AI insulting the sensibilities of various groups of people would have.


With all things, it can be helpful to turn the knobs to the max in both directions to see whether the detent position in the middle really is the best default. Google's attempt at maximum sanitizing wasn't good, which was expected, but it's useful to have the test results to prove it. Going the opposite way, with no safety in place, also has a predictable outcome. That one, however, is much more likely to have negative consequences if truly allowed to run.



