"Yes. We want the model to be useful in real world applications."
"Then it is biased. The model is biased because data it was trained on was generated by people and people are biased. There is no such thing as an 'objective' model, just a model that is biased in a different way."
ChatGPT's left-wing bias probably derives from RLHF, a deliberate fine-tuning step, rather than from the pre-training text: if the RLHF raters are woke, the fine-tuned model will be too.
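For concreteness, here is a minimal sketch (all names and shapes hypothetical, toy embeddings in place of a real language model) of the standard pairwise preference loss used to train an RLHF reward model. It shows the mechanism the claim rests on: the raters' choices of which response is "better" are the only supervision signal, so whatever biases the raters have are baked directly into the reward the policy is later optimized against.

```python
# Minimal sketch: Bradley-Terry pairwise loss for an RLHF reward model.
# Toy random embeddings stand in for a real LM; only the loss matters here.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy stand-in for an LM-based reward model: maps a response
    embedding to a scalar reward."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Each training example is a (chosen, rejected) pair of responses,
# where "chosen" is whichever response the human rater preferred.
# The raters' preferences enter the model here and nowhere else.
chosen = torch.randn(32, 16)    # embeddings of rater-preferred responses
rejected = torch.randn(32, 16)  # embeddings of rater-rejected responses

for _ in range(100):
    # Bradley-Terry loss: push the reward of the preferred response
    # above the reward of the rejected one.
    loss = -torch.nn.functional.logsigmoid(
        model(chosen) - model(rejected)
    ).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that nothing in this loss references the pre-training corpus: the gradient flows entirely from the raters' pairwise judgments, which is why systematic rater preferences show up in the fine-tuned model regardless of what the base model read.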
"Yes. We want the model to be useful in real world applications."
"Then it is biased. The model is biased because data it was trained on was generated by people and people are biased. There is no such thing as an 'objective' model, just a model that is biased in a different way."