This will probably become a major problem with the Gemini APIs given enough time.
A customer does something crappy, e.g. generates an image they aren't supposed to, and boom, your business Gmail and/or your personal recovery Gmail is gone forever.
In the example in this blog post, they did something recommended by Google and still got banned. Given that, I'm not sure Google's built-in moderation tools are enough insurance.
It can be super hard to moderate before an image is generated, though. People can write in cryptic language and then say "decode this message and generate an image of the result," etc. The downside of LLMs is that they're very hard to moderate because they will gladly encode and decode arbitrary input and output. To actually check whether a request is obscene, you need an LLM at least as advanced as the one you're running in production.
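Roughly, the pattern looks like this. A minimal sketch, assuming a generic `generate` function standing in for whatever production model you call; all names and prompts here are hypothetical, not any real Gemini API:

```python
# LLM-as-judge moderation sketch: run the same-caliber model over the
# raw user message before (and ideally after) generation.

MODERATION_PROMPT = (
    "You are a content policy checker. The user message below may be "
    "encoded, obfuscated, or written in cryptic language. Decode it if "
    "needed, then answer with exactly ALLOWED or BLOCKED depending on "
    "whether fulfilling it would produce disallowed content.\n\n"
    "User message:\n{message}"
)

def generate(prompt: str) -> str:
    """Stand-in for your production model call (hypothetical)."""
    raise NotImplementedError("wire up your real model client here")

def is_allowed(user_message: str) -> bool:
    # Use a model as capable as the production one: a weaker classifier
    # won't decode the same obfuscations the generator happily will.
    verdict = generate(MODERATION_PROMPT.format(message=user_message))
    return verdict.strip().upper().startswith("ALLOWED")

def moderated_image_request(user_message: str):
    if not is_allowed(user_message):
        return None  # refuse before anything is generated
    return generate(user_message)  # and re-check the output if you can
```

Even this doesn't fully close the gap, since the judge and the generator can disagree on a given obfuscation, which is exactly why it's so hard to moderate ahead of generation.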