I saw examples of people using it to generate scientific-sounding fake studies, such as one on the benefits of eating glass, or to promote antisemitism.
That being said, I sympathize with the AI researchers here who feel their cool demo has to be taken down because some people were misusing it. It's an unfairly high standard to hold AI demos to, compared with other technologies. It's analogous to asking Alexander Graham Bell to shut down an early telephone prototype because some jerks were using it to discuss antisemitic conspiracies.
I agree that the model does make mistakes. Your examples sound realistic, and I hope we make more progress on keeping models from propagating stereotypes and similar negative patterns that can arise in training data. I meant to criticize the journalism, not to claim the mistakes don't exist.