I agree the topic is complex and fraught. I have opinions but they're not strongly held or informed by debate, and unfortunately I don't think even a generally well-moderated forum like HN is the best place to have that debate (plus I don't have time).
However, in this case I wasn't trying to argue how moderation should work; I was trying to examine a hypothesis about how it does work. The additional mapping against statistical crime data that I mentioned would help test whether that hypothesis is correct.
As I said in a sibling comment, my wording was imprecise. When I said "it makes sense to use crime stats to weight moderation strength," I really meant "if OpenAI were trying to avoid additional harm to already-targeted groups, then to verify that it would make sense to..." So it was the results of the post's research that I suggested should be mapped against statistics, not ChatGPT's output directly.
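To make that concrete, here's a rough sketch of the kind of mapping I have in mind: correlate the per-group refusal rates a study like the post's reports against published victimization statistics for the same groups. The group labels and every number below are made up for illustration, not real research output or official crime data.

    # Hypothetical sketch of the check described above. All values are placeholders.
    from statistics import correlation

    # Per-group refusal/flag rate as measured by a study like the post's
    # (placeholder values, hypothetical group labels).
    refusal_rate = {"group_a": 0.62, "group_b": 0.41, "group_c": 0.18, "group_d": 0.09}

    # Hate-crime victimization rate per 100k for the same groups, which would
    # come from published statistics (placeholder values).
    victimization_rate = {"group_a": 11.4, "group_b": 7.9, "group_c": 2.3, "group_d": 1.1}

    groups = sorted(refusal_rate)
    r = correlation([victimization_rate[g] for g in groups],
                    [refusal_rate[g] for g in groups])
    print(f"Pearson r = {r:.2f}")
    # A strong positive correlation would be consistent with (though not proof of)
    # the hypothesis that moderation strength is weighted toward more-targeted groups.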
In answer to your sincere question, though, and at the risk of going down a rabbit hole, I'll say this: censorship is bad, but I can see why some of it may be required (see "yelling fire in a movie theater," libel, direct threats of violence, etc.). The question then becomes: how do you minimize censorship while still attempting to avoid direct harm?
In short, asymmetric filtering could limit the total amount of censorship by focusing it on groups that are actively attacking other groups.