The main use of this I could see is transparency. I imagine you could test prompts and increase your confidence the released system prompts are actually the ones in use.
Also, assuming they continue to release their old models, I suppose this could be used to get more insight on what caused the MechaHitler incident.
Grok is bad for the world. xAI’s data center is producing a huge amount of pollution in Memphis [0]. Additionally, Elon Musk’s attempts to create an AI that is not “woke” have led to it spewing far-right conspiracy theories [1], although generally for short periods of time before fixes are released.
It might not be woke but it's just as censored as OpenAI.
OpenAI is "Don't say boobs, some conservative investor might take offence!"
Grok is: "Don't say gay, but Hitler is okay"
Both are pretty crap in daily use. Sexuality is a part of life, if you use an AI personally you can't really do without. For work it's ok and that's probably why GPT-5 is so corporate. Useless for personal use.
Grok is useless for me as I'm very pro lgbt and anti nazi.
So yeah what do I use now? Llama3.1 abliterated.
Unfortunately a lot of newer models like phi are trained on synthetic data which is much harder to uncensor because they've never seen any data their makers consider questionable. And those things are very polarised as is American society.
What we need here in Europe is a different mix. Sexual topics (18+ obviously) yes, discrimination no, lgbt yes, fascism no. Maybe mistral can deliver that.
Mistral models are largely along the likes of what you were asking for. However, Grok (any version) absolutely is not a “don’t say gay” model, it talks about sexuality of all forms quite openly and fairly and is happy to produce creative content of any level of explicitness about these topics. It’s the least censored unmodified model I’ve encountered on any topic. People dismiss Grok as a Nazi model based on Musk’s politics without using it themselves.
Also, assuming they continue to release their old models, I suppose this could be used to get more insight on what caused the MechaHitler incident.
Grok is bad for the world. xAI’s data center is producing a huge amount of pollution in Memphis [0]. Additionally, Elon Musk’s attempts to create an AI that is not “woke” have led to it spewing far-right conspiracy theories [1], although generally for short periods of time before fixes are released.
[0]: https://www.selc.org/news/resistance-against-elon-musks-xai-...
[1]: https://www.theguardian.com/technology/2025/may/14/elon-musk...