The article examines the behavior of ChatGPT. It provides no information or claims about the OpenAI staff.
Amazon's scrapped resume AI[1] had a gender bias. Do you think that those developers 1) had a gender bias and 2) did the work to inject that bias into their AI? Do you disagree with the [reported] conclusion that the bias was due to the training data?
If you think that the bias in Amazon's AI was not the result of deliberate human action, what leads you to think that the bias of ChatGPT was the result of deliberate human action?
Not at all; you just have to think they are disadvantaged in society, as the article says: "is more likely to classify as hateful negative comments about demographic groups that have been deemed as disadvantaged."