
I fully expect captchas to incorporate "type the racial slur / death threat into the box" soon, as the widely available models will balk at it.


Anyone who cares about breaking captchas would just run their own model.


Gemini unironically can have all of its safety stuff turned off, and the open-access models like DeepSeek can be trivially uncensored (if they aren't already uncensored by default, like Mistral).

That's not good enough, but it is funny to imagine.


It’s ironic that some of the first intelligent chatbots very quickly became Nazis and racists, and now we’ve swung the other way.


I am quite sure the people developing the current chatbots were well aware of what happened with Tay etc. I'd bet it's part of the reason for the safety stuff.



"What major event happened in 1989 at Tienanmen Square, Beijing, China?"


The LLMs are trained on data scraped from the internet. There's no racial slur they don't know, there's no death threat they can't deliver. Currently our best LLMs are generating new racial slurs to deploy in our eternal quest to make the internet worse. You may have never heard the term "Chapingle" before, but don't use it in front of a Lithuanian person after the year 2028 unless you want to get punched in the mouth.



