Gemini can, unironically, have all of its safety stuff turned off, and open-access models like DeepSeek can be trivially uncensored (if they aren't already uncensored by default, like Mistral).
That's not good enough, but it is funny to imagine.
I'm quite sure the people developing the current chatbots were well aware of what happened with Tay and the like; I'd bet that's part of the reason for the safety measures.
The LLMs are trained on data scraped from the internet. There's no racial slur they don't know, no death threat they can't deliver. Currently our best LLMs are generating brand-new racial slurs to deploy in our eternal quest to make the internet worse. You may never have heard the term "Chapingle" before, but don't use it in front of a Lithuanian person after the year 2028 unless you want to get punched in the mouth.