I think the only solution is a distributed and decentralized web.
Distributed hosting of static content is a sorta-solved problem. But curating, linking and discoverability (which require mutating content) is a lot harder due to the trust anchor problem.
Your suggestion exists and has a name: BitChute. Here is what they have for "About" at the end of their main page:
> BitChute is a peer to peer content sharing platform.
> Our mission is to put people and free speech first.
> It is free to join, create and upload your own content to share with others.
Feel free to read more about it in their FAQ. I really want to stop using YouTube and use this instead.
The big challenge with this is that almost everyone has a "one step too far": some type of content they are not willing to tolerate, or that they may get in trouble for hosting even if it is unintentional.
That makes it tricky for solutions that "put people and free speech first" to succeed, because they've basically painted a giant target on themselves, and even a lot of people who sympathise in principle end up worried about the bits and pieces that step over their personal line.
Figuring out a reasonable solution to this, I think, will be essential to getting more widespread adoption of platforms like these.
I think the problem is the expectation of people that someone else do the filtering for them. I.e. "I don't want to see this kind of content" leads to "someone else should remove it from all the sites I visit". Which obviously leads to conflicting requirements once you have more than one person and those people disagree on what they want to see and don't want to see.
The only reasonable solution is to host everything, modulo requirements by law, and give users the tools to locally filter out content en masse.
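To make the "local filtering" part concrete, here is a minimal sketch, assuming a hypothetical feed of items carrying publisher keys, tags and content hashes (none of this is taken from an existing platform). Rules live on the user's own machine and only change what that user sees, never what exists on the network; rule lists could of course be shared and subscribed to, much like ad-block lists are today.

```python
# Hypothetical sketch: a purely client-side filter applied at display time.
# Nothing is deleted from the network; each user maintains their own rules.
from dataclasses import dataclass, field

@dataclass
class FilterRules:
    blocked_publishers: set = field(default_factory=set)  # e.g. publisher keys
    blocked_tags: set = field(default_factory=set)         # e.g. topic labels
    blocked_hashes: set = field(default_factory=set)       # individual items

    def allows(self, item: dict) -> bool:
        """Return True if this item should be shown to this user."""
        if item.get("publisher") in self.blocked_publishers:
            return False
        if self.blocked_tags & set(item.get("tags", [])):
            return False
        if item.get("hash") in self.blocked_hashes:
            return False
        return True

# Two users with different rules see different views of the same feed.
feed = [
    {"hash": "a1", "publisher": "pk1", "tags": ["cats"]},
    {"hash": "b2", "publisher": "pk2", "tags": ["politics"]},
]
alice = FilterRules(blocked_tags={"politics"})
print([i["hash"] for i in feed if alice.allows(i)])  # ['a1']
```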
In a decentralized system you also skip the legal requirements, since you cannot enforce multiple incompatible jurisdictions at the platform level; individual users will be responsible for enforcing them on their own nodes, similar to how all you can do when accidentally encountering child porn is to clear your cache.
The problem on these distributed platforms is not filtering what people see, but filtering what people host or allow to transit their network connections.
> In a decentralized system you also skip the legal requirements, since you cannot enforce multiple incompatible jurisdictions at the platform level; individual users will be responsible for enforcing them on their own nodes, similar to how all you can do when accidentally encountering child porn is to clear your cache.
But that's the thing: You don't skip it. You spread it to every user. They both have to deal with whether or not they are willing to host the material and whether or not it is even legal for them.
How many of us sympathise with the idea of running a Tor exit node, for example, but avoid it because we're worried about the consequences?
These platforms will always struggle with this unless they provide ways for people to feel secure that the content hosted on their machines, and the traffic transiting their networks, is not content they find too offensive.
Consider e.g. darknet efforts like cjdns, which are basically worthless because their solution to this was to require that people find "neighbours" they can convince to let them connect. That basically opens the door to campaigns to get groups you disapprove of disconnected by harassing their neighbours and their neighbours' neighbours, just as you can go to network providers on the "open" internet.
First of all, not all p2p networks operate like Tor. For example bittorrent and ipfs only host content you look at. So hosters could largely self-select the content they replicate.
Secondly, there are several tiers of content. a) stuff that is illegal to host b) stuff that is not illegal but that you find so objectionable that you don't even want to host it c) stuff that you don't like but doesn't bother you too much d) stuff you actually want to look at.
I posit that a) and b) are fairly small fractions and the self-selection mechanism of "things that I looked at" will reduce that fraction even further.
And even if you are on a network where you randomly host content you never looked at, encryption can provide you some peace of mind (of the obliviousness kind), because you cannot possibly know, or be expected to know, what content you're hosting. Add onion routing and the person who hosts something can't even be identified. If Viewer A requests something (blinded) through Relay B from Hoster C, then B cannot know what they're forwarding and C cannot know what they're hosting. If neither you nor others can know what flows through or is stored on your node, it would be difficult to mount pressure against anyone to disconnect.
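As a toy illustration of the obliviousness point (not any real protocol; the functions and the store are made up), content can be encrypted and stored under a locator derived from the ciphertext, with the decryption key travelling only inside the link that viewers pass around. The hoster indexes opaque blobs, and a relay would just forward them unchanged:

```python
import hashlib
from cryptography.fernet import Fernet  # assumes the `cryptography` package is installed

def publish(plaintext: bytes, store: dict) -> str:
    key = Fernet.generate_key()                        # random per-item key
    ciphertext = Fernet(key).encrypt(plaintext)
    locator = hashlib.sha256(ciphertext).hexdigest()   # what the hoster indexes by
    store[locator] = ciphertext                        # Hoster C only ever sees this blob
    return f"{locator}:{key.decode()}"                 # the link viewers share out of band

def fetch(address: str, store: dict) -> bytes:
    locator, key = address.split(":")
    ciphertext = store[locator]                        # Relay B would forward this opaquely
    return Fernet(key.encode()).decrypt(ciphertext)    # only Viewer A, holding the key, can read it

store = {}                        # stands in for Hoster C's disk
addr = publish(b"hello", store)
assert fetch(addr, store) == b"hello"
```

This is roughly the property that Freenet-style content-hash keys aim for: possession of the stored blob alone tells the hoster nothing about what it contains.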
For the illegal content, especially in oppressive environments, you could install a Voluntary Compliance(tm) government blocklist on public-facing nodes and still opt to run an internal node in your network that uses encrypted connections to retrieve things hosted in other countries you're not supposed to see.
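A minimal sketch of what that split could look like (the blocklist format and function names are hypothetical): the public-facing node consults a locally installed list of banned locators before serving anything, while an internal node simply runs with an empty list.

```python
# Sketch of the "Voluntary Compliance" split; names and formats are assumptions.
from typing import Optional

def load_blocklist(path: str) -> set:
    """One banned locator per line, as distributed by whoever demands compliance."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def serve(locator: str, store: dict, blocklist: set) -> Optional[bytes]:
    if locator in blocklist:
        return None              # the public-facing node declines to serve this item
    return store.get(locator)    # an internal node just passes blocklist=set()
```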
----
Anyway, back to filtering for decentralized content hosting. I think once you have a network it is a matter of managing expectations. You can't make content magically disappear. Platforms like youtube, twitter, facebook etc. have raised the false expectation that you can actually make things go away by appealing to The Authority and it will be forever gone. In reality things continue to exist, they just move into some more remote corners of the net. Once expectations become more aligned with reality again, and people know they can only avoid looking at content but not make it non-existent, things boil down to being able to filter things out at the local level.
> And even if you are on a network where you randomly host content you never looked at, encryption can provide you some peace of mind ... If neither you nor others can know what flows through or is stored on your node, it would be difficult to mount pressure against anyone to disconnect.
I think you misunderstand the objection. Yes, encryption can mean you cannot be prosecuted for "hosting"/"transmitting" some objectionable stuff, since you can prove that you had no idea (at least that's the theory).
However, some want to be able to "vote with their wallets" (well, "vote with their bandwidth"). They don't want to assist in the transmission of some content; they want that content to be hard to find, slow and unreliable. They have the right to freedom of association and don't want to associate with those groups. Encryption cannot guarantee that I won't help transmit $CONTENT.
> First of all, not all p2p networks operate like Tor. For example bittorrent and ipfs only host content you look at. So hosters could largely self-select the content they replicate.
I'm aware of that, but then you suffer the problem of people wanting deniability.
> Secondly, there are several tiers of content. a) stuff that is illegal to host b) stuff that is not illegal but that you find so objectionable that you don't even want to host it c) stuff that you don't like but doesn't bother you too much d) stuff you actually want to look at. I posit that a) and b) are fairly small fractions and the self-selection mechanism of "things that I looked at" will reduce that fraction even further.
That's true, but those sets pretty much only need to be non-zero to threaten people's willingness to use such a network.
Further, unless there is stuff in a), or stuff that falls into b) for other people, that you want to look at, such a network has little value to most of us, even though we might recognise that it is good for such a network to exist for the sake of others.
This creates very little incentive for most to actively support such systems unless they also deal with content that we are likely to worry about hosting/transmitting.
> For the illegal content, especially in oppressive environments, you could install a Voluntary Compliance(tm) government blocklist on public-facing nodes and still opt to run an internal node in your network that uses encrypted connections to retrieve things hosted in other countries you're not supposed to see.
That's an interesting thought. Turning the tables, and saying "just tell us what to block". That's the type of idea I think it is necessary to explore. It needs to be extremely trouble-free to run these types of things, because to most people the tangible value of accessing censored content is small, and the value of supporting liberty is too intangible.
> Anyway, back to filtering for decentralized content hosting. I think once you have a network it is a matter of managing expectations. You can't make content magically disappear. Platforms like youtube, twitter, facebook etc. have raised the false expectation that you can actually make things go away by appealing to The Authority and it will be forever gone. In reality things continue to exist, they just move into some more remote corners of the net. Once expectations become more aligned with reality again, and people know they can only avoid looking at content but not make it non-existent, things boil down to being able to filter things out at the local level.
This, on the other hand, I fear is a generational thing. As in, I think it will take at least a generation or two, probably more. The web has been around for a generation now, and in many respects the expectations have gone the other way - people have increasingly come to be aware of censorship as something possible, and are largely not aware of the extent of the darker corners of the net.
Centralisation and monitoring appear to be of little concern to most regular people. People increasingly opt for renting access to content collections where there is no guarantee the content will stay around, instead of ensuring they own a copy, and so keep making themselves more vulnerable, because to most people censorship is something that happens to other people.
And this means both that most people see little reason to care about a fix to this problem, and that they have an attitude that gives them little reason to be supportive of a decentralised solution that suddenly raises new issues for them.
Note that I strongly believe we need to work on decentralised solutions. But I worry that no such solution will gain much traction unless we deal with the above issues in ways that remove the friction of worrying about legality and morality, and that provide very tangible benefits that give people a reason to want it even if they don't perceive a strong need of their own.
E.g. Bittorrent gained the traction it has in two ways: through copyright infringement, and separately by promising a lower-cost way of distributing large legitimate content fast enough. We need that kind of thinking for other types of decentralised content: at least one major feature that is morally inoffensive and legal, that attracts people who don't care if Facebook tracks them or Youtube bans a video or ten, to build the userbase where sufficient pools of people can form for various types of content to be maintained in a decentralised but "filtered" manner. Not least because a lot of moral concerns disappear when people feel they have a justification for ignoring them ("it's not that bad, and I need X").
I genuinely believe that getting this type of thing to succeed is more about hacking human psychology than about technical solutions.
Maybe it needs a two-pronged attack - e.g. part of the problem is that the net is very much hubs and spokes, so capacity very much favours centralisation. Maybe what we need is to work on hardware/software that makes meshes more practical - at least on a local basis. Even if you explicitly throw "blind" connection sharing overboard, perhaps you could sell people on boxes that share their connections in ways that explicitly allow tracking (so they can reliably pass the blame for abuse) to increase reliability and speed, coupled with content-addressed caching on a neighbourhood basis.
Imagine routers that establish VPNs to endpoints, bond your connection with your neighbours', and maintain a local content cache of whitelisted non-offensive sites (to prevent the risk of leaking site preferences in what would likely be tiny little pools).
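A rough sketch of how such a box might decide what to share, assuming a hypothetical whitelist and cache (keyed by URL here for simplicity; a real content-addressed design would key objects by the hash of their bytes): only whitelisted, non-sensitive sites ever touch the shared neighbourhood cache, everything else goes out over the user's own uplink.

```python
# Hypothetical sketch: the whitelist, cache layout and fetch callback are assumptions.
import hashlib
from urllib.parse import urlparse

WHITELIST = {"example.org", "downloads.example.com"}   # assumed shared, public list

def request(url: str, neighbourhood_cache: dict, fetch_direct) -> bytes:
    host = urlparse(url).netloc
    if host not in WHITELIST:
        return fetch_direct(url)                       # never shared with neighbours
    key = hashlib.sha256(url.encode()).hexdigest()
    if key not in neighbourhood_cache:
        neighbourhood_cache[key] = fetch_direct(url)   # first requester fills the cache
    return neighbourhood_cache[key]
```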
Give people a reason to talk up solutions that flattens the hub/spoke, and use that as a springboard to start to make decentralisation of the actual hosting more attractive.
> But that's the thing: You don't skip it. You spread it to every user. They both have to deal with whether or not they are willing to host the material and whether or not it is even legal for them.
> How many of us sympathise with the idea of running a Tor exit node, for example, but avoid it because we're worried about the consequences?
Tor isn't the best example, because exits don't cache anything. So mainly, exit operators get complaints. And the exit IPs end up on block lists. Operators don't typically get prosecuted. Maybe they get raided, however, so it's prudent to run exit relays on hosted servers.
Freenet is the better example. The basic design has nodes relaying stuff for other nodes. In an extremely obscure and randomized way. Also, keys are needed to access stored material.
However, nodes see IPs of all their peers. Investigators have used modified clients to identify nodes that handle illegal material. So users get busted. There is "plausible deniability". But it's not so plausible when prosecutors have experts that bullshit juries. So users typically plea bargain. Or, if they use encrypted storage, they get pressed for passwords. Like that guy in Philadelphia.
It doesn't matter if operators get prosecuted or not. What matters is whether people in general see running exit nodes as somewhat risky. Unless there is a reasonable perceived payoff, even a very minor perceived cost will be enough to stunt the growth of such a network severely.
While I don't disagree with your argument per se (not sure if I quite agree either, though), note that avoiding a decentralised platform because of being "worried about the consequences" is not necessarily the same thing as worrying that "the content (..) is content they don't find too offensive".
The first includes both legal and moral considerations, the second only moral ones.
My consideration of whether to share, part of the time, some slice of my home Internet connection bandwidth as a Tor exit node is almost entirely a legal one (I admit that time/effort may play a role too). I'd consider the moral aspect too, but I wouldn't have to think long to decide that (for me personally) the trade-offs are worth it (I could explain why and how, but I don't want to derail the discussion in that direction).
In fact I'd argue this goes for anyone, in some sense. Even if their underlying reasons align with the legal considerations (and they thus don't run one), it's a moral judgement. (In the worst case, there exist people who equate moral judgement with legality.)
> I.e. "I don't want to see this kind of content" leads to "someone else should remove it from all the sites I visit".
I don’t think that’s always the mindset. Isn’t it reasonable for people to have the mindset of “I want to go to sites that don’t have content that I find objectionable”? This way websites can decide which group of people they want to cater to.
The context of the discussion is large websites acting as platforms. Their users are bound to have conflicting views about what's "objectionable". So when the moderation mechanism is deletion instead of letting users just filter, the website has to preferentially treat one group instead of being a platform for everyone.
But the deeper issue is that some people don't want certain content to even exist, and won't be satisfied with just a filter (even though they can't tell the difference between a filter and deletion).
I don't think Youtube censors extremist content because "I" don't want to see it. It's because "I" don't want anyone else to see it! There's no use me filtering my own videos if my goal is to limit what other people see.
This is at the heart of the issue: should platforms bow down and allow some users to dictate what others are allowed to see? Or should a platform remain neutral toward everything (which means potential backlash)?
Warning: it has this channel https://www.bitchute.com/channel/whitepower/ which is full of anti-semitic nazi stuff. It's visible on the front page listed above. You may or may not want to visit that link.
Channels aren't open to everyone, so it looks like they have manually allowed that?
> I think the only solution is a distributed and decentralized web.
I love the idea but one problem: who pays for it? It's a special case of the co-operative vs corporation problem. Without an individual's starting capital, how do you get off the ground?
> Distributed hosting of static content is a sorta-solved problem. But curating, linking and discoverability (which require mutating content) is a lot harder due to the trust anchor problem.