Good question. For a start, don't pretend that Nazi soldiers were a multiracial bunch. And don't do whatever Google did to generate clearly incorrect output like this.
Sure. That's a major over-correction. But there are known biases in data sources that need to be accounted for; you don't want to perpetuate them further.
I think it's obvious that Google went to a ridiculous extreme in the other direction, but some amount of work does need to be done here. For example, we have repeatedly seen that merely changing the name on a resume to something more European-sounding can significantly change callback rates when applying for a job, and if you trained a model to screen resumes on your own historical callback data, the model could pick up that same bias (a toy sketch of the mechanism is below). That's the sort of situation these corrections are meant to address.
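To make that concrete, here's a minimal sketch. The resumes, names, and callback labels are fabricated purely for illustration, and this is not any real hiring system; it just shows how a classifier trained on biased historical outcomes can learn to use the name itself as a feature.

```python
# Toy illustration: a screening model trained on past callback decisions can
# learn name-based bias if names are included in the text it sees.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Fabricated "historical" data: identical qualifications, callbacks skewed by name.
resumes = [
    "Greg Baker - 5 years Java, BSc Computer Science",
    "Emily Walsh - 5 years Java, BSc Computer Science",
    "Lakisha Washington - 5 years Java, BSc Computer Science",
    "Jamal Jones - 5 years Java, BSc Computer Science",
] * 25
callbacks = [1, 1, 0, 0] * 25  # biased past outcomes become training labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(resumes, callbacks)

# Identical qualifications, only the name differs - the learned scores differ too.
print(model.predict_proba(["Greg Baker - 5 years Java, BSc Computer Science"])[0, 1])
print(model.predict_proba(["Jamal Jones - 5 years Java, BSc Computer Science"])[0, 1])
```

The point isn't this particular pipeline; any model fit on outcomes that already encode the bias will happily reproduce it unless you intervene.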
That multiracial Nazi soldiers output wasn't baked into the model: it was a prompt engineering mistake, part of the instructions a product team was feeding into the Gemini consumer product to tell it how to interact with the image generation tool.
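Roughly this kind of thing, sketched below. The function name and instruction text are mine, not Google's; the point is that the served model is unchanged and a rewrite layer between the UI and the image tool is what alters the output.

```python
# Hypothetical product-layer prompt rewrite (illustrative only, not Google's code).
def build_image_prompt(user_prompt: str) -> str:
    # Applied unconditionally, an instruction like this gives reasonable results
    # for "a doctor" and absurd ones for historically specific requests.
    product_instructions = "Depict people of a diverse range of ethnicities and genders."
    return f"{user_prompt}. {product_instructions}"

print(build_image_prompt("a 1943 German soldier"))
```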
You keep using that word. I don't think it means what you think it means.
But seriously: a "mistake" is usually something that could not have been foreseen by a group of people reasonably versed in the state of the art.
This product release was so far from a "mistake" that it isn't funny. It was spectacularly well tested, found to be operating within design parameters, and released to great fanfare.
They expressed delight in their product, and actually seemed surprised that there was a backlash from the great benighted unwashed masses of their lessers, who clearly couldn't be expected to understand the elevated insights produced by their creation!
So: not a "mistake". Institutional bias, baked into a model. Remember: a system's purpose is what it does, not what you think it is supposed to do.
As someone who works with these models as an engineer, I think it's important to understand that a feature implemented as part of the user-facing UI for a model is irrelevant to the work I do with that model via an API.
There's a difference between inserting bias and allowing a real-world pattern to exist in AI. There may be reasons to dislike these real-world patterns, but that doesn't mean that allowing them to exist in AI is inserting a bias.
For example, if you ask an AI to write a realistic story about an NBA team and it comes back with a roster of players with stereotypically Asian names, that would be unrealistic. If it came back with players with stereotypically Black names, that would be fine. Does it reflect a real-world pattern? Yes. But not changing the algorithm to generate more diverse names isn't inserting bias; it's letting the AI reflect the real world as it exists.