We need more alternatives to Reddit with different moderation practices that allow competing narratives to surface. I used to view Reddit as "what's popular on the internet?", but these days it feels more like "what's the Approved Narrative?". While there is plenty of dangerous content that calls for violence or is abusive in some other way, it's frustrating that in many subs, questioning the Approved Narrative gets you censored.
I think content moderation could be done by the community flagging problematic content. Flagged content could be temporarily hidden, with the reason it was flagged. The OP could then write a short defense of the content, and then a "jury" of randomly selected users (who are over 18) could vote on whether to permanently remove the content or to allow it. Content couldn't be flagged again for the same reason if the jury had previously approved it. There would be a public record of content removal decisions, and perhaps even a restricted way to request the removed content as a matter of public record, without surfacing it into public view.
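For concreteness, here's a minimal sketch of that flag-and-jury flow in Python. Everything in it is hypothetical (the names, the jury size of 9, simple majority voting); it's a model of the proposal above, not a real implementation:

```python
import random
from dataclasses import dataclass, field

JURY_SIZE = 9  # hypothetical; tuning this is part of the experimentation

@dataclass
class Post:
    body: str
    defense: str = ""     # the OP's short defense, written after flagging
    hidden: bool = False
    cleared_reasons: set = field(default_factory=set)   # reasons a jury has already approved
    removal_record: list = field(default_factory=list)  # public log of removal decisions

def flag(post, reason):
    """Temporarily hide a post pending jury review, unless this reason was already cleared."""
    if reason in post.cleared_reasons:
        return False  # can't re-flag for a reason a jury previously rejected
    post.hidden = True
    return True

def try_by_jury(post, reason, adult_users, cast_vote):
    """Draw a random jury of adult users; a simple majority decides removal.

    cast_vote(user, post, reason) should return True to remove, False to allow.
    Assumes len(adult_users) >= JURY_SIZE.
    """
    jury = random.sample(adult_users, JURY_SIZE)
    remove_votes = sum(cast_vote(u, post, reason) for u in jury)
    removed = remove_votes > JURY_SIZE // 2
    post.removal_record.append((reason, removed, remove_votes))  # public record
    if removed:
        post.hidden = True   # stays retrievable on restricted request, not surfaced publicly
    else:
        post.hidden = False
        post.cleared_reasons.add(reason)
    return removed
```

The `cleared_reasons` set encodes the double-jeopardy rule (no re-flagging for a reason a jury already rejected), and `removal_record` is the public log of decisions.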
Obviously, there would need to be some experimentation to get a better system. Which is my whole point. I wish there were more systems experimenting with this.
It's not just happening on Reddit. The whole media industry works like this: if your point of view does not fit within the 'accepted bounds', it will not get the attention it deserves.
Attention is one thing. What scares and frustrates me is censorship: banning accounts, removing posts, and, to a lesser degree, echo-chamber downvoting that hides the content.
Any new resource will either be too small, or it will contain only "approved content", or it will be labeled white-supremacist or similar (if it is a free site, controversial/revolting content may be added by adversaries).
There is Tildes. It’s open source and non-profit, maintained by donations, and private by default. The community is heavily moderated and, while we do have liberty, we do not venerate free speech. There’s a great focus on civility. Most of our users are also on Reddit, and the creator of Tildes also created Reddit’s /u/automoderator.
Image posts do not exist, and we usually prefer medium-to-long text and videos that are dense with content. It’s not entirely formal, though, and there are avenues for conversation and personal expression.
The announcement post: https://blog.tildes.net/announcing-tildes
> The community is heavily moderated and, while we do have liberty, we do not venerate free speech. There’s a great focus on civility.
If someone made a civil argument with citations, leading to a conclusion you find immoral, but you didn't have the expertise to find errors, would you delete it or leave it up?
I don’t know; I’m not a moderator. It’s hard to say without a concrete case, but if such immorality was also against Tildes’s principles, then yes, probably.
Tildes is definitely not a place for free speech absolutists.
I haven't been around Tildes in a while, but they are doing the right thing when it comes to producing an alternative to Reddit: focusing on building a community rather than building a platform. After all, if you find Reddit to be terminally flawed, you will not solve those problems simply by duplicating Reddit.
The Tildes founders looked at Reddit, then at what happened to Voat, and identified the problem as a social one, not a technical one. The problem with Reddit and Voat lies between the keyboard and the chair. This is why they focus on the quality and good faith of the discourse, instead of metrics like growth or mass appeal. They aren't aiming to be a clearinghouse for all manner of social interaction in the way that commercial social media platforms like Twitter, Facebook, and Reddit are.