None of these solutions offer any advice on the business model, which means they're dead on arrival.
For a new social media to be "fixed", the company behind it needs to have the incentives and business model to steer towards more healthy behavior. None of these cover that.
We've had plenty of social media startups try to fix social media, like Path, but ultimately they've failed because a "fixed social media model" almost seems antithetical to "a good business."
I'm all for Pinterest and I think it's relatively healthier than the rest, but they barely have a functional business...
One of the core problems with "social media" in 2021 is the default assumption that it should be monetised and a business.
There is a difference between providing infrastructure for social connection and farming/harvesting/leveraging the information gathered from what is constructed on that infrastructure.
The current situation is akin to if AT&T had listened in on all phone conversations (assuming no technical barriers) and sold ads based on what you said to friends and family. Before you made a call you would need to listen to a personalised ad on your receiver.
Before facebook or myspace or twitter there were blogs/personal websites, forums, email and instant messaging. All these options worked for expressing one's self online and none of them were under a centralised, for-profit entity.
Wikipedia is a good example of a decentralised social network although it appears to be increasingly infected with the reddit/twitter virus.
I run a Mastodon instance for myself and some friends. It's got about 250 accounts total. The hosting costs are about evenly split between myself and friends willing to throw me some money via Patreon. It is not making me money and I do not want it to make me money because then it would probably require a lot of my time to deal with moderating it. I don't want to have to do that or hire anyone to do it.
I run it to have a space that's free from the bullshit corporate social media does in the name of "increasing engagement" to make more profit off of ad views. I like that I can visit my instance, catch up on the local timeline and the stuff I'm personally following, and then be done with it for most of the day, versus opening up a corporate site that's had tons of very smart people sink a lot of time and effort into figuring out how to keep me there for hours on end, whether or not I'm better off for it.
I don't think the monetization and business aspect is what makes social media so bad. It's more that they're going about it in the wrong way.
You make the comparison to AT&T. They charge a lot of money for the service that they provide, and as you mention, they don't do all the shady things that social platforms do.
Most other great tech products are provided by businesses. Apple is the prime example of a company that is both (a) massively successful and (b) not hostile to their users' privacy.
The value is in the network infrastructure, not the so-called "tech companies" operating on it as middlemen, spying on internet users as a so-called "business".
We as users already pay for that infrastructure. We pay for an internet connection. The network infrastructure is already monetised.
That network infrastructure is worth something even if every single tech company ceased to exist. Would you stop paying for internet access just because some website went offline?
The large websites many people are using are 100% user-generated content, much of it being in the form of human communication. That so-called content is not going to disappear if tech companies stop spying for money. (We often hear of tech company users losing their content when the companies discontinue so-called "service". Not surprising since the companies serve advertisers, not users.) The tech companies are not the content creators. Generally, users do not pay tech companies for service and tech companies do not pay users for content. Tech companies are middlemen.
"Tech" companies want you to believe "This is how it must be. There is no other way." They want you to believe they are "the internet". But the menu is not the meal. The map is not the territory. The middleman has no inherent value. He must leech to survive.
People online often comment that "Facebook" (not to mention many other tech company websites) could be written, for free, in a weekend, and that what makes Facebook so popular is not the people who write PHP for Zuckerberg; it is the network effects and the fortuitous timing and luck that Facebook has enjoyed to become so large.
What has been done to date can be replicated by others, but Big Tech wants people to accept another "default assumption": massive scale, all under single ownership, is a requirement.
AT&T was broken up and we did not lose telephone service. Around the world, customers of one telephone company are able to call customers of other telephone companies.
Originally Wikipedia did not condone advocacy or propaganda; now it does, as long as it can be attributed as a "fact" because it has been published by a "reliable source." Who decides what is a reliable source? The same sort of clique that moderates /r/worldnews or /r/politics. Reddit is an Orwellian cancer.
These are the same policies Wikipedia has had for over a decade, with only relatively minor changes. Are you saying they're being interpreted differently? Could you give an example?
Relatively minor changes have huge effect on the content of articles. The linked article also addresses how Wikipedia has banned conservative sources from Wikipedia including Fox News, the Daily Mail, and the New York Post. "In short, and with few exceptions, only globalist, progressive mainstream sources—and sources friendly to globalist progressivism—are permitted."
Every claim in a Wikipedia article must be accompanied by a source. Claims that are only covered by conservative media and not covered at all by mainstream (liberal) media cannot be referenced as a source in a Wikipedia article. This leads to conservative viewpoints being removed from articles, which directly causes articles to become biased towards mainstream viewpoints.
Exactly. Policy is implemented by people. Eventually those who put an extraordinary amount of time in online will take over editor/moderator positions and win edit wars. Most of those people are definitely not real experts in the field, because real-world experts wouldn't have the time or motivation to engage in online fights.
For one thing, all media sources should be on equal footing. Let the community decide what is bullshit. It is ridiculous that the BBC is considered a reliable source but RT is not. Both have a specific point of view that they promote, reliably; Wikipedia is not supposed to have a point of view. If the BBC is allowed but RT is banned then Wikipedia's POV will tend toward the BBC's simply because a contrasting POV is censored. The same would hold for allowing an Israeli newspaper but banning an Iranian one as an unacceptable source. All media outlets should be on equal footing.
No, I disagree. They're state propaganda; it's just that the current journalist profession is so activist-left they won't do anything for a right-leaning government, but they will sue over a plainly fair-use video before an election to try to create chaos. They're too entrenched in their brand of leftist ideology to be used by the right.
They're a state-backed propaganda machine pushing the state's narrative, but resisting any other narrative when the current opposition is in power, because they've been entirely staffed by activist journalists.
Just as Al Jazeera is Qatari state propaganda, BBC is British state propaganda. You can pull the wool over your eyes and disagree, but they're all state backed narrative machines.
Bellingcat is obviously an MI6 PR outlet. To believe otherwise is ludicrous. Some stay at home dad who used to sell women's underwear has an independent insight on world class covert activities? Give me a break!
Bellingcat was started by a Something Awful poster/moderator called Brown Moses; you can see his evolution from discussion board member to journalist by reading his old posts.
We've banned this account for using HN primarily for political/ideological battle. That's against the site guidelines because it destroys this place for its intended purpose—and for that reason, we ban accounts that do it, regardless of which politics/ideology they're battling for.
Please don't create accounts to break HN's rules with. Single-purpose accounts are not cool in general, because pre-existing agendas aren't simpatico with curiosity, which is the core value of this site.
Blogs, personal websites, forums, email, and IM all still exist and are still large platforms.
It’s clear based on user behavior that users like the experience that centralized, for-profit social media networks provide. That experience is invariably very expensive to deliver… dramatically more expensive than serving Wikipedia, for example, and the costs grow faster as the number of users increases.
So, in order to deliver the user experience that real people have demonstrated their preference for, one of the key inputs is exceptionally large (and increasing) amounts of money.
In order to be sustainable, that money has to come from somewhere. Targeted ads aren’t the only option, but no viable competitor has been discovered.
The reason why targeted ads are so productive here is because the value of the network grows non-linearly (up to a point) with each additional user. If you charge an entrance fee to each user, your network will be smaller and therefore outrageously less valuable than any network that does not charge an entrance fee.
Therefore, so long as any social network is willing to make their network free to enter, effectively no network with a comparable service can charge for usage.
Importantly, not only will the free social network be more valuable than the paid social network in relative terms, it will also be more valuable in absolute terms – both in terms of value received per user and number of users receiving value. So, it may be possible to regulate free social networks out of existence, but doing so would necessarily destroy a lot of user value in the process.
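To make the non-linear point concrete, here is a rough back-of-envelope in Python. It assumes Metcalfe-style growth (value proportional to the number of possible user-to-user connections), which is my simplification and not something stated above:

    # Toy Metcalfe-style estimate; every number is an illustrative assumption.
    def network_value(users, value_per_link=0.01):
        # value proportional to the number of possible user-to-user links
        return value_per_link * users * (users - 1) / 2

    free_users = 1_000_000   # hypothetical free, ad-funded network
    paid_users = 100_000     # hypothetical paid network with 10% as many members

    print(network_value(free_users))   # ~5.0e9 "value units"
    print(network_value(paid_users))   # ~5.0e7 "value units"

Under that assumption the paid network isn't 10x less valuable for having 10x fewer members, it's roughly 100x less, which is the "outrageously less valuable" dynamic described above.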
Perhaps there is some type of patronage or donation-based model that could both enable the free entry price and avoid the pitfalls of the ads-based business model. However, an organization like this is unlikely to attract the technical talent necessary to build competitive products.
Of course, like both of us mentioned, other models and competitive ecosystems exist. Centralized social networks have not killed email or instant messaging or blogs or websites. Those things are, by every measure, actually way bigger than they ever have been. So in a sense, the thing you’re hoping for is already here, it’s just not the dominant modality chosen by other people.
There are fair questions to ask about the power that ownership and control of a centralized social network confers. Though I think those are orthogonal concerns to the question of funding mechanisms.
In sum, I think the world you’ve expressed desire for does, indeed, already exist. It happens to co-exist with the world that you have expressed distaste for. While I am empathetic to your distaste for social media’s influence and business model, I think it’s important to recognize both why it exists and the ways in which its existence is not strictly zero sum.
I wonder if it is more that users are addicted to Twitter or Facebook than that they like the services. Or if it is more a matter of everybody else being on there.
I am confident that if a free, simple social network had caught on among college users around the time Wikipedia started, then Facebook or Myspace never would have been commercially viable.
Almost all users prefer no ads and privacy to ads and no privacy.
I disagree that that experience needs to be expensive to deliver. Imagine a decentralised social media network hosted on its users' compute devices, kind of like email used to be. Aside from large streaming videos, not that much processing power or storage is required per user.
How would a free simple social network be sustained? Who would pay for the software, hardware, bandwidth, and content moderation? Asking for donations and volunteers seems unlikely to scale to the level required.
For instance, hosting even 30-second videos becomes prohibitively expensive on storage and bandwidth basics alone as soon as more than a few hundred videos are uploaded daily. Setting restrictive data retention policies may negate or stabilize some of those costs, but you would constantly be smashing into your financial limits with a platform like that.
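A quick back-of-envelope on that video-hosting point (every number here is an assumption I am plugging in for illustration, not measured data):

    # Illustrative assumptions only.
    videos_per_day = 300       # "a few hundred videos uploaded daily"
    avg_size_mb = 50           # ~30 seconds of reasonably compressed HD video
    views_per_video = 200      # each clip watched a couple hundred times

    storage_gb_per_day = videos_per_day * avg_size_mb / 1024
    bandwidth_gb_per_day = videos_per_day * views_per_video * avg_size_mb / 1024

    print(round(storage_gb_per_day))    # ~15 GB of new storage every day
    print(round(bandwidth_gb_per_day))  # ~2930 GB, roughly 3 TB served per day

Storage accumulates unless you delete, and bandwidth scales with views, which is why the costs bite so quickly at donation scale.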
Further, I do not know if enough people would be willing to pay for access to a social media site for it to ever reach millions of users. Even at 1 USD a month per person, people wouldn't join unless millions were already there to make it worthwhile.
>It’s clear based on user behavior that users like the experience that centralized, for-profit social media networks provide.
I don't think this is a given, and it's not that expensive to deliver either; the problem is more the de facto monopoly of network position. But there are lags on all these kinds of activities, and user behaviour today is conditioned on user experience yesterday; it may not predict future user behaviour well. Social media is still new; the experience of having it delivered to a mobile phone is barely 10 years out of the box for most of the world.
Consider a marketing team that predicted the future of the book by looking only at input from the reading habits of a kindergarten, and ignoring any correlation with how those would change as the kids got older. Note that none of this has anything to do with payment per se, and neither does today's social "advertising enabled" media.
>there were blogs/personal websites, forums, email and instant messaging.
If you're listing those as Facebook alternatives, you misunderstand the key ingredient that made Facebook get adopted by billions of non-technical mainstream users: the real name identities.
The other communications platforms of dial-up BBSs, USENET, Geocities, personal Wordpress, vBulletin/phpBB forums, AOL AIM ... do not have strong real-name identities for non-tech people to find each other. Anonymous/pseudonymous online handles and cryptic login names have too much friction of discovery and are not scalable to a billion users. (Previous comment about the real Rolodex acting as database primary key: https://news.ycombinator.com/item?id=15294086)
Facebook (possibly accidentally) bootstrapped the psychology of users creating logins with their real name in 2004 because Harvard students wanted to find each other on the service. In turn, as other schools were added on, other students were motivated to use their real names to interact with friends in other schools. The social graph of real names became so viable that old high school alumni who were "lost" can now find each other again and grandparents can easily keep track of grandchildren's baby photos. The fake names of USENET and other forums prevent that from happening.
Emails don't work for social network growth because you can't derive an email address from a real name you know. Someone has to tell you the exact spelling of their email address. With Facebook, you can just find them by their real name (or their real phone number) and send a "friend request".
>All these options worked for expressing one's self online
The secret sauce to Facebook's dominance is its accumulation of real identities and not because it gives Facebook users a way to "express themselves". E.g. A lot of grandparents following grandchildren photos don't post anything.
Consider direction of cause & effect:
- easier evolution of platform: a bunch of users log in with real names to follow/poke each other --> easy to build that network out into groups for discussion, shared calendar events, shared news articles, etc
- harder to grow platform: start with discussion forum software with fake names --> impossible to build a giant sticky social network graph from that base
Facebook & Linkedin built on real names. WhatsApp growth built on real phone numbers.
Platforms built on real identities will naturally centralize because nobody has come up with a viable decentralized alternative for a universal identity database. Ideas like web-of-trust, blockchain, keybase.io, etc don't accomplish the same thing as Facebook's real_id database.
[To downvoters, if you have a better explanation of how Facebook got mass adoption by non-tech people in ways that didn't happen with USENET and vBulletin web discussion forums, please reply with your thoughts.]
It's not like Facebook required real names in the beginning either. You could sign up as "joe s" and many of these profiles still exist.
The secret was the rollout (the school network connected to your edu email), and other tricks like getting you to give your login/password so hotmail contacts could be downloaded.
The journey to the top has a lot of little wins. Real names were a step added much later and connected with their ad network strategy.
>Real names is not the story. [...] The journey to the top has a lot of little wins.
I understand, but my comment wasn't trying to explain Facebook's competitive market value by reducing it only to real names. To beat Friendster, beat Google+, and get to its successful IPO, one can point to many things ... e.g. the addictive NewsFeed rollout in September 2006... and free unlimited disk space for photos, photo tagging of friends, etc.
My comment to gp's quote was how Facebook as a social network works very differently from USENET/Geocities/vBulletin/AOL_AIM. One can't point to those older communications alternatives and say they work "just as well". On those platforms, there's a fundamental difference that prevents people from even finding each other.
There was the badge of exclusivity at first as well. To be one of the earliest adopters of fb you had to have a Harvard email address.
When they opened the floodgates it was a chance for normal people to associate with fancy harvard people and show how they were too cool for MySpace now.
You would be hard pressed to duplicate that kind of buzz anymore.
So why doesn't someone simply replace Facebook with an "internet phonebook" that allows you to easily look up/publish links to personal blogs, email, forum accounts, IM, etc.?
Finding people you know on Facebook is only one aspect of their success and it's probably the most heavily network-dependent one. The other thing that drove adoption was the relative simplicity of publishing content. Your Facebook profile was/is essentially a dumbed-down webpage where you can post text, links and photos. You also get free hosting, free "email" in the form of messaging, and a bag of other features that you would normally have to set up yourself and pay for. They did a good enough job of providing all of the killer features of the internet for free to average, non-technical users.
>So why doesn't someone simply replace Facebook with an "internet phonebook" that allows you to easily look up
Because it's hard to get people to enter that information.
In contrast, Harvard students in 2004 were highly motivated and that started everything off.
From 2005 to 2008, the killer feature of Facebook was entering your real information when signing up and then Facebook showing all the real people in life that were connected to you. E.g... old high school friends finding each other again. Likewise, signing up for Facebook means you also wanted to "be found" by others. It was the first mainstream service to successfully accomplish this. Myspace didn't do it. Friendster tried but their webpage performance was too slow.
>The other thing that drove adoption was the relative simplicity of publishing content.
Most Facebook users do not "publish content". Most just follow their friends or read links passed on by friends (or the algorithm). [EDIT for clarity: I mean "publish content" the way that gp I replied to cited "blogs/personal websites"]
When I used Facebook back in the day, I joined because people I already knew and talked to on a regular basis were on there. We mostly posted photos and organized events. I didn't use it to find people I used to know so I guess my experience was different.
By publish content I mean post links, photos, comments, rants, etc. Do people no longer do that on Facebook?
I don't know the overall statistics but the majority of my Facebook friends publish content at least occasionally. Mainly photos of children, pets, vacations, etc.
When you consider business is ultimately about making money in which something has to give/lose out, and social connectivity is about hanging out with people you like, you realise these things don't go together, and monetising socialisation is what dooms (what used to be) a good experience in the end.
You can monetize socialization; we've done it forever. Clubs come to mind, for instance. No one wants to pay directly for social media, though, unlike in-person experiences, so we have what we have.
That's because monetisation via food/alcohol sales (in moderation) can enhance the socialisation experience, so it's a win-win.
Because big media got hooked on advertising revenue and news became subsidised and we got news/information cheaply, we'd been collectively taught that fact generation and dispersion costs were negligible (although they're not; work and energy are required, and this was offset by the ad revenue).
Move this to the digital space where information flows at close to light speed and you can accurately measure all sorts of marketing stuff - it just goes faster to the point of absurdity. Whatever value we can create via connectivity gets trashed by being subverted or obverted by corporate interests.
I'm now pissed off because I pay YouTube $x per month not to be interrupted by ads, and now the content creators have their own ads in their content.
When you scale this out, it becomes a sad picture of humanity excitedly self-reinforcing the monetisation of things via interruption of engagement/continuity.
Actual value is evaporating.
No wonder we're all getting ADHD.
The spaces that are mostly simple and left alone are where interesting things can still continually occur. That's precisely why HN is actually still going strong.
I agree with you; I find size is a predictor of decline in quality. HN hasn't hit that threshold, but even so, with what growth it has had, the moderation effort required to keep it this way has also increased significantly.
When a tribe gets too large, because of Metcalfe's law the tribe loses the ability to discuss things to the level of nuance that is required, and that causes fractures that lead to a tribal split.
Because all of the internet communities have no natural way to split, the only thing left is for them to become toxic and eventually die.
Systems attempt to keep the tribe going longer by keeping track of metrics, and this works for a while because the tribe had built up trust. But the bigger the tribe gets (with new members never trusting, forming an Eternal September), the more the metrics reign, and, as Goodhart's law suggests, the more inevitable it is that it collapses.
> ... there seems to be only one cause behind all forms of social misery: bigness. Oversimplified as this may seem, we shall find the idea more easily acceptable if we consider that bigness, or oversize, is really much more than just a social problem. It appears to be the one and only problem permeating all creation. Whenever something is wrong, something is too big. ... And if the body of a people becomes diseased with the fever of aggression, brutality, collectivism, or massive idiocy, it is not because it has fallen victim to bad leadership or mental derangement. It is because human beings, so charming as individuals or in small aggregations, have been welded into overconcentrated social units. - Leopold Kohr, 1957
No one will pay to hang out in an empty room just because other people are there. They pay to go to clubs and bars because they provide entertainment, food, drinks, and atmosphere to go along with the social experience. If you created a rich enough social media experience then people would be fine paying for it. It's not surprising people don't want to pay for social media as it currently exists since it parasitizes all of its content from the web which is for the most part freely available. It's like fencing off a corner of a public park and trying to charge admission to enjoy the fresh air and sunshine.
That's because clubs do not interfere with the things we say to one another. Also they are not monopolies, so they can't experiment freely without risking us going to another club.
Well, yes and no. The music in clubs is often designed to be so loud as to interfere in conversation, on the premise that the less you are able to converse the more you will drink.
I saw no such influx but then again I was only into chili growing, general gardening, photography and that kind of stuff.
I'd like to discuss mental health issues as well, but personally I don't feel like posting to ADHD or anything when my family or my boss or my future boss can, with trivial ease (two clicks, no mental gymnastics), go from my post there to my professional profile.
Besides, the term "far-right" is so utterly abused now that it doesn't mean anything except "people I don't like".
>When you consider business is ultimately about making money in which something has to give/lose out...
This is the opposite of what I learned in Econ 101. People voluntarily transact because both parties stand to benefit - otherwise one party would choose to sit out.
That's because you're talking about value, whereas I was talking about money. You can also benefit timewise but lose monetarily and vice versa...hence 'time is money'.
Money is meant to represent equivalence of value but it doesn't do this very precisely or well - which is why using it to 'measure' things causes so many problems.
I get where you're coming from, but that's Econ 101 idealism where you assume both parties are fully knowledgeable about everything regarding the transaction; the real world is not like this. People also naively transact simply because they've been taught to do so.
Cafés and restaurants handle this well. I think it is because they “commoditize their complement”[1], i.e. you’re allowed to talk while you consume. (Not too loud, though.)
A restaurant exists explicitly to serve food first. If you're still there an hour after paying your bill, they'll start asking polite questions.
A café, in most places, is a place for hanging out that makes money on the assumption that people will like refreshments, snacks or even small meals while doing so. They're typically not going to ask you to leave just because you haven't gotten anything in an hour or two (unless they're full). The exception appears to be some places where coffeehouses think they're a theme park ride.
The current situation is like if 90% of bar or restaurant patrons went to one of three multinational chains all of which secretly recorded patron conversations in order to sell ads while giving away cheap food.
> I'm all for Pinterest and I think it's relatively healthier than the rest
I can't think of Pinterest as more than pollution of Google image search. They may be good, but the first impression is so bad that I don't want to go further.
I was thinking about this the other day: a decentralized social network where its own users govern and moderate it sounds like a plausible way to make this...
dApps are still in their infancy, but it might come... Not sure if anyone will use it though, like the rest of the blockchains, lol
The big potential difference is that the early decentralized systems like Usenet and Relay treated costs and spam and paying their developers as out-of-band concerns. Whatever else you think of the newer stuff around blockchains, etc., it changes the game in just this way. If you want an online world not under the thumbs of Google/Facebook/Twitter/etc., you need a thesis about how the same takeover won't happen.
I ask because Usenet, before it became a distributed lossy datastore for media pirates, was "a decentralized social network where its own users govern and moderate it", and it was a complete, well-documented shitshow.
What makes you think that user-moderated content won't be as toxic as the algorithm-driven kind? Look at reddit for example: the vast majority of content out there is posted to cause outrage, and people parrot it across the rest of the site.
Reddit isn't decentralized. The "moderators" play enforcers for the admins, to push their agenda and cleanse the site of any content the management doesn't want to see. Outrage is their business model; it leads to increased interactions between users (i.e. flame wars) and so they spend more time on the site. Any community or mod that doesn't play along is removed, thus in the major subs you can think of them as directly working for the company, although unpaid. The users are NOT in charge; there is a crazy amount of political censorship on reddit. In the end that leaves the echo chamber it is now. Your example demonstrates how the authoritarian style of managing social media suppresses free speech but does nothing to rein in toxicity. Facebook and Twitter are similar toxic echo chambers.
On an actually decentralized platform the radicals would still be there, but so would many other voices. Where users can't be banned for "wrongthink", communities must convince their audience with arguments or fear losing supporters. The chilling effect would be gone. A truly decentralized site would have plenty of communities the individual user doesn't agree with - just like in real life. And just like in real life users can deal with it by staying away.
Removing the_donald sub was ridiculous, especially when compared to equally if not more objectionable rhetoric on main subs like /r/politics. Additionally, if you want to get rival subs banned on reddit you just post and report objectionable material incognito. Reddit is a dumpster fire of juvenile, woke idiocy now and I rarely visit anymore. That said, some of the content on the_donald was just as idiotic, but if you are going to allow one you have to allow the other.
I feel like I'm missing something with Pinterest. My experience of it is that it turns up annoyingly often in search engines, and has hits on pinterest.co.uk, pinterest.ie, pinterest.fr, pinterest.com etc. It's one of the sites that prompted me to install a browser addon that filters out Google hits; I've blacklisted every one of their domains.
IMO there are two critical components to fixing the social media business model:
1. The platform/provider/server needs to work for the users, not for some shady third party like the advertisers. The most obvious way to do this is to have the users pay for the resources that they consume. There are alternatives, too -- users could band together to form cooperatives, or some benevolent foundation could kick in a lot of funding. (This last one is the current model for Signal; I guess we'll see how well it lasts...)
2. Key to making #1 work is that all the content needs to be end-to-end encrypted. Otherwise the platform will be too tempted to also start doing targeted advertising, in addition to charging for access.
Neither of these is all that difficult. We have all the technology right now. Mostly we just need to get people together and do it.
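To make point 2 concrete, here is a minimal sketch of client-side encryption, assuming something like PyNaCl (libsodium bindings); the library choice and key names are mine, the point is just that the platform only ever stores ciphertext:

    # Minimal sketch, assuming PyNaCl. The server stores only ciphertext,
    # so there is nothing to mine for ad targeting.
    from nacl.public import PrivateKey, SealedBox

    follower_key = PrivateKey.generate()   # follower's keypair, kept client-side

    # Author encrypts a post to the follower's public key before uploading it.
    post = b"end-to-end encrypted status update"
    ciphertext = SealedBox(follower_key.public_key).encrypt(post)

    # Follower decrypts locally after fetching the ciphertext from the server.
    plaintext = SealedBox(follower_key).decrypt(ciphertext)
    assert plaintext == post

In practice you would encrypt each post once with a symmetric key and then wrap that key for every follower rather than re-encrypting per follower, but the shape is the same.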
Who do you think is going to pay for social media?
Are they still going to pay when most people leave the service because they won't pay?
What regular person would rather pay than see ads? If you gave people TVs they would put up with the ads. If you gave people free dialup internet they would put up with a banner across their browser.
Paying is a non-starter.
End-to-end encrypted what? Posts/videos? Who cares..
I don't care how many people have Netflix, Spotify or cable when I purchase, nor does it affect my experience. I watch the same movies whether you have it or not.
If they charged for YouTube, fewer people would sign up, fewer people would make content, more people would leave, and more content producers would leave. The value goes down.
Parents may buy private space for photos/videos: Dropbox, iCloud, Google Drive. But that isn't necessarily social media. Twitter, Facebook or Instagram would not survive on parents buying accounts for their children.
"How’s Spotify doing these days"
Doesn't Spotify have ads? You could buy a subscription... you could also buy a subscription to YouTube.
It’s the only thing we have that can prevent the server(s) from analyzing everyone’s posts, including images, to build a tracking profile that would make an East German secret policeman blush.
One possible business model is to replace most politicians the way Uber has replaced most taxi company owners. Politicians are supposed to translate the needs of the people who elect them to the bureaucrats who are supposed to bring that to reality, but they do their job very inefficiently, wasting a huge amount of money.
If there were a combination of Facebook, change.org and https://voteflux.org, allowing people to directly vote on issues they care about, trade votes, propose laws, create local communities with custom laws, and in general control the way government spends taxes, there would be enough money saved to allow the companies providing this service to be richer than Facebook without using any shady tricks.
Just to add to my other post, I like what you're getting at. I think getting money out of politics would make a huge difference. Let's make a voluntary and auditable pledge that politicians could make before they are elected where they promise never to entertain lobbyists. Politicians could take the pledge to win votes in tight election races.
Allowing people to vote on issues they care about would be a disaster. We'd have the death penalty back in a week, and abortion/immigration/drugs outlawed in a month.
Democracy doesn't work because the masses are smart and get to have their say. It works because democracies are an agreement that we can do anything in our power to change the government EXCEPT violence. Making the government more responsive to the population will result in worse government, not better.
I'm not advocating for totalitarianism. I'm just cautioning against thinking that it's the wisdom of the people that makes good democracy.
When the people making the decisions actually know about the things they're deciding on, democracy can work great. When the decisions are made by people who know nothing about the topic but are totally convinced they know everything about it no matter how completely wrong they may be and no matter how many provable facts you face them with, democracy fails utterly.
What voteflux proposes is not simply direct democracy, where the opinion of people who don't care enough to go to the ballot box is simply discarded. Votes can still be delegated, by setting your vote to follow the vote of a person you trust; changes require a significant majority, not simply 50%+1; people can trade their votes on different issues to reach consensus; major changes can't be accepted in a week if there is significant opposition; and most importantly, different regions will be able to choose different laws.
Now issues like abortion/immigration/drugs are used like a carrot by politicians to herd voters one way or the other, and a politician elected as a result of support on one issue ends up voting on lots of other issues in a way that an absolute majority of voters do not like.
Voteflux, being a marketplace of votes, can allow people with different beliefs to find compromises with each other, will allow politicians to understand what people actually want instead of guessing based on the number of angry letters/tweets, and will allow people to organize in a way that permits different laws in different places, letting society experiment with different things instead of fighting a life-and-death battle each election.
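To illustrate just the delegation part, here is a toy sketch of the general "my vote follows someone I trust" idea; this is my illustration, not voteflux's actual mechanism:

    # Toy sketch of delegated ("liquid") voting: follow a voter's chain of
    # trust until someone who voted directly is reached.
    def resolve_vote(voter, direct_votes, delegations):
        seen = set()
        current = voter
        while current not in direct_votes:
            if current in seen or current not in delegations:
                return None        # cycle or dead end: treat as an abstention
            seen.add(current)
            current = delegations[current]
        return direct_votes[current]

    direct_votes = {"alice": "yes"}                   # alice voted herself
    delegations = {"carol": "bob", "bob": "alice"}    # carol follows bob, bob follows alice

    print(resolve_vote("carol", direct_votes, delegations))  # -> "yes"
    print(resolve_vote("dave", direct_votes, delegations))   # -> None (never voted or delegated)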
Why would it be violent? Change.org is already able to influence politicians, if it was updated to have the required features, more politicians would have to follow it, and new politicians who believe in direct democracy would be able to get elected.
So it would be a turbulent time, but no violent revolution.
You've got to be kidding. Change.org has only ever influenced minor, inconsequential issues. Signing online petitions allows people to feel good about themselves without actually doing anything.
Hahahahah! Pure freakin' GENIUS this is! Needs to happen. We'll just replace politicians outright.
————————————————————————————————————————
So, I'm totally okay with this comment gettin' downvoted, but can ya at least like … I dunno … add to the conversation maybe?
Perhaps a small comment on why his idea isn't pure genius? Or why I shouldn't find it amusing? Or how about why replacing politicians with a better system could somehow be a bad thing? Downvote me if you like, but downvotes without context don't help me (or anyone else) understand anything you might be trying to get across.
————————————————————————————————————————
Just for fun, I'll propose another idea; Let's start drafting people into political office.
"Congratulations {random citizen}! You've just been elected President!"
And just like a regular wartime military draft, refusal to do the job to the best of your ability = jail time.
I've always wanted to see that happen. When television came along, governments funded public broadcasting channels. Now that social media has come along...why couldn't there also be a publicly-funded social media service?
I think we're at a shifting point where people would be willing to pay for social media vs seeing ads and/or having their data sold. I recently launched Glue[1] in the hope of having a social network that respects user privacy, offers an ad-free option and selfishly scratches the itch for features I wanted.
I ran my previous startup (Twitpic) completely on ads and it wasn't an enjoyable experience from the business side. It also does not (usually) align with the user's best interest. Ad business models require attention to feed them, which in turn requires social media companies to build features to get as many eyeballs for as long as they can on their app.
I also don't think paid-only is the answer either for most cases. Some are OK with seeing ads in exchange for not having to pay to use the service. I'm curious to see if there is a balance that can be struck with services offering both options.
Brave's new search engine[2] is another example saying they will offer a paid option in the future. I used their beta and it's solid. I'd be willing to pay for it to be free of Google and help sustain it.
I hope you're right about people being ready to pay. And I think you are. There's a huge untapped market just in the US, of people who have mostly or totally checked out of the existing platforms.
Glue looks cool. How does this compare to something like Mastodon?
I'm working on something related, called Circles [1]. It builds on Matrix for decentralization and E2E encryption. We're also in beta, hoping to launch later this month.
Hear hear! I deleted my Facebook around 2007 and never joined Instagram; Twitter is the only traditional social media site I use, so I can attest to those checking out (or never joining). Even with Twitter, I have to make an effort to not "doom" scroll before bed.
Circles looks cool, as decentralized and E2E are fascinating to me. Glue doesn't have a whole lot in common with Mastodon, but it has similar microblogging features I believe.
Hit me up sometime, would like to hear more about your project. - noah@ark.fm
The business models are the problem. It's far too lucrative to do what the big social media companies are doing, which is to exploit the social behavior of their users in return for hard advertising cash. Users are of course willing victims here but if you step back a little, social media is mostly a very low tech business of connecting people with each other via "feeds" of information. As long as they "engage" with it, basically you are printing money.
There is no incentive to fix that because it isn't broken for the likes of Facebook. Of course for the rest of us there is a huge incentive. But it raises the question of how. How is it going to work, and how are you going to convince people to use it? The first part has lots of answers in the form of social networks without a lot of users. So, the latter part is the problem.
As for authenticity and integrity: some notion of using cryptographic signatures could work. It's not particularly hard. News especially should only be coming from authentic sources. It's such a low-tech solution to just sign your work and stake your reputation. But somehow that's not a thing. Instead, our social media feeds are full of crap from dubious sources. Because clickbait works and sells clicks.
That points to a solution. The likes of Facebook should start authenticating sources of information and start accounting for reputability. They are obviously incentivized to instead serve you clickbait. So the solution is to incentivize them otherwise. Hold them accountable. They help spread misinformation and profit from it. That could have consequences. When it does, Facebook will adapt.
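As a sketch of what "just sign your work" could look like, assuming Ed25519 signatures via PyNaCl (the key names here are made up for illustration):

    # Minimal sketch, assuming PyNaCl's Ed25519 signing. The publisher keeps the
    # signing key secret and publishes the verify key once; anyone can then check
    # that an article really came from that key and was not altered.
    from nacl.signing import SigningKey
    from nacl.exceptions import BadSignatureError

    newsroom_key = SigningKey.generate()   # kept secret by the publisher
    verify_key = newsroom_key.verify_key   # published widely (site, DNS, a keyserver, ...)

    article = b"Headline: ... article body ..."
    signed = newsroom_key.sign(article)    # message plus signature

    try:
        verify_key.verify(signed)          # raises if the content was tampered with
        print("authentic: matches the publisher's key")
    except BadSignatureError:
        print("reject: does not match the claimed source")

The hard part isn't the crypto, it's distributing and trusting the public keys, which is what the keybase reply below is getting at.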
https://keybase.io/ had a pretty neat thing goin' where you basically claim your online identity to the world at large through publicly available cryptographic signing proofs that anyone can easily verify through their website or desktop app. If sources of news or science or other important public information would cryptographically prove their information came from them in a way that was easily understood by the general public then it'd be hella harder to spread disinformation without it bein' tied right back to its source or instantly dismissable due to lack of provable origin details.
Item 6 in the article specifically addresses business models. Scott Galloway advocates for subscription-based models (I feel he's misguided). And the question was the subject of an earlier article on the summit, mentioned and linked within the article:
Besides CPUs that get faster and more efficient every year, phones that get smaller and last longer, global interconnected networks, cars that don't burn fossil fuels, ...
I think regulation is the answer that aligns with the business model. All businesses are forced to align in the same way, and no company can benefit from breaking the rules.
If you fine companies for every day foreign bot accounts use the platform to spread misinformation, they'll staff up on countermeasures.
If you pass laws against amplifying fake news and spreading negative sentiment, there will be machine learning investments like we've never seen before. Comments that have low confidence scores won't be shared broadly.
If you regulate these companies like common carriers, they'll respect free speech on all sides of the aisle as long as it isn't law breaking.
Not sure why you're downvoted. I disagree on implementation but it definitely needs some form of regulation. All other solutions call for massive overnight changes to human nature which isn't gonna happen.
“There’s always been this division between your right to speak and your right to have a megaphone that reaches hundreds of millions of people,” - that is the argument totalitarians always use. It's the same as you have the right to free speech, but not the right to print it, or you can print it, but you can't distribute it on the street corner, or talk about it in an assembly of over a certain number of people. In every instance it's about whoever has control attempting to limit the expression of an idea that they happen not to agree with. Totalitarians will always use an example of where speech somehow created harm to exhibit the need for control, the truth is that freedom does cause harm, and will always cause harm, because you cannot eliminate harm completely no matter the system. We have plenty of historical examples of attempts to eliminate harm causing exactly the opposite.
>> It's the same as you have the right to free speech, but not the right to print it, or you can print it, but you can't distribute it on the street corner, or talk about it in an assembly of over a certain number of people.
No printing press is obligated to mass-print a book you write. Similarly, you cannot go to a corporation's campus and distribute stuff unless you have their permission — because it is their private property. Just because private parties are refusing to give you a platform does not mean they are totalitarians.
>> Totalitarians will always use an example of where speech somehow created harm to exhibit the need for control, the truth is that freedom does cause harm, and will always cause harm, because you cannot eliminate harm completely no matter the system.
This is a strawman in the shape of an Argument from Futility. It's no different than saying "why have safety systems in cars when you cannot eliminate car accident deaths completely?" It's a strawman because nobody is aiming to completely eliminate harm from online speech, just prevent it in egregious situations where it can be prevented. It's an argument from futility because harm reduction is still a valid goal even if complete elimination of harm is not possible or feasible.
> No printing press is obligated to mass-print a book you write.
This is off-topic, I feel. The article illustrates that there is a push to stop "printing press" from printing "your book" in cases when it's perfectly happy to do that. In the name of fighting misinformation, of course.
You have the right to free speech, the right to print it (at Kinkos, even if a regular printer won't touch it), the right to distribute it on a street corner, but not the right to force people to take it, and not the right to force a bookstore to sell it.
I used to always think just shining a big light was all that was needed to overcome falsehoods and manipulation, but it turns out that's not true. How naïve of me. Turns out various philosophers realised this hundreds of years ago.
Anyway, it doesn’t matter, I can finally say I’ve found my people now, @thebirdsarentreal :-) (it's a satirical take on belief systems that vehemently denies it's satirical, which is just chef's kiss)
Way back in the stumbleupon days I stumbled onto the flat earth society message boards. It was clear to me that everyone involved knew it was satire; just a tongue-in-cheek dig at the way any conspiracy theory, no matter how outrageous, could be made plausible if you throw enough crazy at it.
Fast forward 10+ years and people actually believe it. Someone recently launched themselves in a homemade rocket to prove with their own eyes the flatness of the earth (and died for their efforts).
I love birdsarentreal, but part of me isn't looking forward to finding out what happens to it as more people come across it.
Indeed; Umberto Eco dramatized this phenomenon in Foucault's Pendulum (even before the related Poe's Law was formulated).
> People [...] sense instinctively that the [...] truths [...] don’t go together, that [the promoter is] not being logical, that [they're] not speaking in good faith. But they’ve been told that God is mysterious, unfathomable, so to them incoherence is the closest thing to God.
This tangles up with the appeal of being an "initiate", "on the inside", and knowing the secret truth, and the combination is powerful.
Reflexive satire and irony are one of the problems of the social-media age.
Satire and irony used to be effective. In the last 20 years, cynical distance and ironic posturing have become so prevalent that they are no longer considered subversive. Citizens can consume outrage passively through various satirical media products, displacing outrage and abstaining from more active forms of resistance.
People consume satirical content, and think they are above it all, while just participating in cynical ha-ha.
“There’s always been this division between your right to speak and your right to have a megaphone that reaches hundreds of millions of people,” she said.
One is the freedom of speech, the other is freedom of the press; we as humans have the right to both. Yet never before in history have we had access to the press power of the social media revolution. For a time, the gatekeepers took a hands-off approach, but that time is over.
End Internet censorship now. We need platforms where the battle of ideas can be fought and won on a level playing field without the creeping hand of censorship in the name of "combating disinformation" getting in the way.
Fact check all you want, may you win the battle of ideas. But talk of censorship, even from MIT, stinks of totalitarianism.
First, let's explore this a little. Let's say it were possible to lie to a billion people -- a lie that could really and truly end lives. Hypothetically....would you stand back and let this happen?
Because that's the problem with the absolutist position against "internet censorship." It denies, ever, the possibility of a harmful kind of speech which should in fact be acted against. People can list lots of examples. (Doxxing. Calls for violence against individuals. Child pornography. Dangerous medical misinformation. Copyrighted information.) Are we really okay with a billion people getting all of these things -- rather than one person being "censored"?
And the other problem is we're not talking about a government with a constitution... We're talking about Instagram and Pinterest (and other social media companies). I've heard it said that they very conveniently claimed "We're adopting the same absolutist free speech principles of a country" mostly as a ruse to keep from having to invest in any kind of monitoring of their services.
Lies don't end lives, people do. You can't say that a lie "caused" someone to do something. It may have influenced them, but they are ultimately responsible for their actions. More generally, I don't think we should be in the business of policing second or third order effects. Responsibility ultimately ends with the perpetrator of an action. If someone lies to one (or many) people, and some of those people go on to murder: punish the murderers, move on with life.
> Doxxing
Not sure what's wrong with this TBH - publishing public information shouldn't be a crime. Some sites have anti-doxxing policies, some don't.
> Calls for violence against individuals
Direct threats of violence are already illegal. People that make them should be arrested.
> Child pornography
Already illegal - find and imprison the pornographers. We already do rather a good job at this.
> Dangerous medical misinformation
In addition to the obvious idea that people are responsible for their own worldviews, any process to distinguish "misinformation" from "information" requires an oracle that everyone can agree to trust. We have no such oracle.
> Copyrighted information.
Reproducing copyrighted material is already illegal and pretty well enforced.
> mostly as a ruse to keep from having to invest in any kind of monitoring of their services.
Why do you think it's OK to impose these costs onto businesses? Why should companies who effectively provide a digital bulletin board be responsible for policing its contents? Why not have... you know... the police be responsible for policing? Then you can clearly and directly send any complaints about their effectiveness to your local representatives, who will actually be empowered to do something about them.
We totally agree that many of these things are, indeed, illegal.
But if that's the case, then at some point a society is also going to need to consider how it's going to also address the distributing of those things which are illegal. (If something can still really be disseminated to a billion people -- then what was the point of even making it illegal in the first place?)
For decades and decades all publishers have been responsible for the content they publish. (See libel laws, just for example.) So it just seems irresponsible to now concede the existence of vast unpatrolled online empires making billions of dollars while simultaneously creating dangerous (and illegal) situations which others will then need to police for them.
> We totally agree that many of these things are, indeed, illegal.
Good! Then we don't need any new laws or policy changes. We just need to empower law enforcement to do their jobs. As I've mentioned, they seem pretty empowered already.
Publishers indeed have been responsible for content. Sites like Instagram or Pinterest are hardly publishers however, they're more akin to the bulletin boards that used to be ubiquitous in public places (With Pinterest it's literally in the name). Anyone can post whatever they want without any editorial process. If someone posted a murder contract on such a bulletin board, would the board owner (say your local grocery store) be responsible? Hardly.
Vast unpatrolled online empires where people can say whatever they want sounds pretty good to me actually. It's freedom of speech in action. Anything actually illegal or really dangerous is policed pretty well (say compared to TOR, and even there criminals are not immune). Otherwise just leave people be. No one "creates a dangerous situation" just by saying something online. It takes more than that.
I would agree with you if humans were good at fact checking and then changing their minds based on those facts. But humans do not work that way and never will. Even you yourself don't work that way because you won't check to see if I'm lying, and if you do and see that I am not and that the research soundly supports my position, you won't change your mind.
> We need platforms where the battle of ideas can be fought and won on a level playing field without the creeping hand of censorship in the name of "combating disinformation" getting in the way.
In a world where the Russians have a professional and very active disinformation organization, your "level playing field" looks more like a military conducting a massacre against a civilian population...
I think it is probably not possible to have both “true information” and “control of information” simultaneously.
I suspect that this is a false choice, akin to “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety”.
If we may observe history, we see that societies where one is not free to speak do not have more “true beliefs”, and indeed believe many things we think of as nuts.
History trivia: the "essential Liberty" Franklin was talking about was not any personal or individual liberty. It was the liberty of the government to tax property. The "purchase" he was talking about was literally buying things with money.
The Pennsylvania Assembly wanted to raise money to defend the frontier during the French and Indian war by taxing land. The Governor kept vetoing this at the behest of the Penn family, which owned a lot of land that would be hit by any such tax, each time finding some objection that Franklin and the Assembly found made little sense.
Franklin wrote to the governor arguing that the Assembly should not have to give up its liberty to exercise taxing power over the Penn family lands in order to raise money for defense.
Yes, the common usage of the quote is more lyrical than Franklin's usage. I didn't quote him specifically because it's sort of a misquote used in his sense.
Without the appeal to authority of citing Franklin though, it's not a quote that holds as much water when inspected on its own merits.
Trading liberty for safety is the crux of social contract theory, so the absolutist interpretation of the quote is clearly devoid of worth. We can instead focus on "essential" and "temporary," and then perhaps it has value but we must then argue about the meaning of those words.
... Is the freedom to shout falsehoods with Facebook's megaphone without Facebook weighing in "essential?" How "temporary" is the safety we gain from Facebook tossing fact-checks on posts?
There is so much emphasis on fake news but I think any story can be shaped by _which_ facts are discussed. Face it, you can't put a complete nuanced view in the sound bite world of social media.
At the same time, I think people enjoy being outraged. You get an actual physical energy burst, and you can think how superior you are to the misguided idiots you are talking about.
I don't think social media is tricking us into this behavior, it is just helping us do it better. It would be good if we could figure out how to discourage the behavior in the first place.
The first note they provide is essentially about burning Galileo at the stake. That strategy does not work; the only correct way is to inform the public. Education is the ONLY solution, and this indoctrination path will only result in war.
Generally, producing true and accurate information takes more effort than producing misinformation, and on the consumption side understanding a refutation of something generally takes more time than understanding the original thing.
This leads to an inherent advantage for the producers of misinformation.
Your approach worked, say, 50 years ago, when most of us had a limited number of information sources and so had time to actually see and understand the refutations of most attempts at misinformation.
Nowadays, we've almost all got more sources of information competing for our attention than we have time to deal with. It is much less likely we'll see the answers to the misinformation, or have time to deal with them. Furthermore, with what we see algorithmically determined to maximize engagement, we are probably going to be shown more misinformation from the same or related sources instead of the refutation.
Indeed! That was a different Italian heliocentrist, Giordano Bruno. (He has an excellent statue in Rome on the Campo de Fiori, un-subtly facing the Vatican).
You've identified a number of real things here, but you've also swallowed a few whoppers. "CRT" was originally something that a few scholars discussed in university settings. Now it is mostly a non-existent right-wing bogeyman that YouTube weirdos hype for clicks. No one is going to teach your little daughters and sons that they are inherently evil. (Although, isn't that the Christian doctrine of Original Sin?) However, some of their better teachers might teach some history that you didn't learn in school.
"Indicrat" is certainly one of those "YouTube weirdos" referenced above. The lecture audience seen in that clip were old white women, not young white children. If those women hadn't heard about racism before then they should have.
Also, obviously the woman in the form-fitting attire was using intentionally provocative language. She doesn't really believe in demons. She was even giggling during part of the clip. There is nothing wrong with provocative language, unless one is a cancel-culture moron with easily hurt feelings.
You're refusing to accept that CRT is in fact embraced, advocated for, and actively shaping discourse and attitudes among people. This example did not spring up out of the blue. Your dismissal of it as a right-wing fiction is either naive or dishonest.
If that were the case, CRT Chickens Little would be able to come up with some evidence of it. Actual evidence, not the amusing video linked above. Which precious white children have run crying from elementary school?
To be clear: I'm sure that (adult) white Americans have been informed of their racism. I'm sure that 1950s lynchings and other uncomfortably recent events have been taught in some history courses. I'm sure that racist aspects of various contemporary institutions have been discussed. I'm sure that you have been very uncomfortable with all of that.
All of those are good things. Stop hiding behind hypothetical traumatized children.
So you agree that all white people are racist, and carry responsibility for lynchings in the 1950s? Even recent immigrants who have no relation to the perpetrators of said lynchings? We're supposed to be uncomfortable too, just because we bear a passing resemblance to some assholes? Isn't that a bit... racist?
That appears to be about a parochial high school in NYC. It's my impression that we're supposed to be very concerned about "young children" in public elementary schools in the heartland. There's really no accounting for the shenanigans these weird churches will get up to.
I don't consider myself responsible for lynchings that occurred before my birth. I feel like I have to take a bit of blame for e.g. Michael Brown's murder, since I've voted in Missouri for years. But if someone wants to criticize me for both of them, it won't break my heart. Occasionally humans face discomfort.
Acting like these are small isolated events and don't represent a wider shift in educational policy just doesn't seem sincere. This article identifies some of the very same practices of using race to determine whether students fall into oppressor or oppressed classes. Excerpts:
> She alleges that teachers and students are required to participate in racially segregated antiracist exercises and that teachers are required to teach material depicting white people as inherently racist oppressors
> ...a biracial high-school student who claims he received a failing grade in a required “Sociology of Change” class because he declined to complete an assignment that required students to identify their gender, racial and religious identities to determine whether they qualified as oppressors
This is racism.
> I feel like I have to take a bit of blame for e.g. Michael Brown's murder, since I've voted in Missouri for years.
If you feel like taking some of the blame off of Darren Wilson's shoulders, I can't stop you. I would however object to any placing of blame onto other Missouri residents who didn't kill anybody. It seems like this would allude to a disagreement about whether responsibility for action is fundamentally collective or individual.
In any case, it's a total strawman in the context of this argument. We could waste our time trying to reconcile two extremist identities that we don't align with, or we could get back on topic and try to find a mutually amicable solution to this problem.
And people wonder why nothing bipartisan ever gets done...
No, you're missing the point completely. You can't address something by pretending it is inconsequential or, worse, doesn't really exist (i.e. gaslighting). This narrative that far-left extremist views are only right-wing paranoia is dishonest and short-circuits any bipartisan debate about how to proceed.
The person above was even defending this particular example, saying it was colorful rhetoric and not serious. Would he have done that if it had been said about Jews or Blacks? Until we can have honest discussion with integrity instead of partisanship, nothing will ever get done.
My belief is that we should take the extremists on our own side of the debate to task. The left should do everything it can to defuse the far left and call out the left's unreasonable positions, and the right should do the same against the far right. It would go a long way toward making the center more habitable and productive.
The lecturer only said that white Americans are racist. That is certainly true. (I.e., we certainly are racist.) What analogous description would you like to suggest "about Jews or Blacks"? You will successfully continue the racism that blacks face in USA as long as you can convince idiots that it's "extreme" to acknowledge racism. Unfortunately for you, that type of idiot is growing more rare over time.
Wow, dude. Lots of projection here. I've never called anyone a "Nazi". CRT critiques the racism of the mainstream, so it's definitely not mainstream itself. That may be why some people are so afraid of it, because they desperately love the mainstream status quo in which their fear and hatred for BIPoCs is reflected in powerful institutions. I'm not a "proponent" of CRT; it's just obviously correct to anyone whose view isn't obstructed by his colon. Seeing a CRT person behind every tree is like last year when thousands of mom-and-pop convenience stores were supposedly burnt down by white antifas, of whom there are probably about 200 nationwide.
You're 100% correct and the people down voting should think about what happens if they lose control of the censors and censorship starts being run by right-wing zealots instead of the far left.
Sad to see the parent comment getting flagged. Censorship by flagging is a very real problem on HN, but from what I can tell it's mainly users abusing the feature, not the mod team, who are generally well-balanced.
Yeah, I don't think of a fair moderation system as censorship. Groups of people should have the freedom to keep a discussion focused on whatever they desire.
Still, it's disappointing just how easy it is for us to be completely blind to things that go against our own preferences, even when they're blindingly obvious to others.
I agree it's so easy not to see our own biases reflected in our own censorship actions. We all have blind spots, but censoring parts of a discussion simply because you don't like them isn't healthy.
Funny thing is … Hacker News discussions and a certain subset of channels on Telegram and Discord are probably among the most genuinely social media I've managed to find online.
This seems completely wrong to me. It's just more of the same nonsense. Social media platforms are not to be trusted as arbiters of truth. That isn't their job, and if they are "held accountable" for disinformation they will simply double down on censoring any speech that deviates from mainstream orthodoxy. What we need is not a crackdown on lies but a return of institutions we can trust.
This entirely misses the point IMO. Social media is the most powerful propaganda/idea pushing machine in the history of the world and it ain't even close.
Lies, half truths, and everything in between are presented in the most riveting ways possible and our monkey brains just can't help but soak up every last bit of it. If a person believes everything on social media then __they__ should be fixed, not social media.
A great solution is for people to see social media for what it is: 99% garbage that gives you a quick dopamine hit.
People should be made aware that on social media you are likely:
1. Reading misinformation
2. Viewing content that's created with the sole purpose of manipulating you into a certain view (often by a bad actor)
3. Reading opinions by trusted people who've been compromised by the above two points
While not perfect, I believe that if people assumed everything on social media is false and worked from there, they would be a lot happier.
> A great solution is for people to see social media for what it is
Just like a great solution to the climate emergency is to stop releasing greenhouse gases, a great solution to war is peace, and a great solution to starvation is to eat food.
There are also studies that say reading the news makes people unhappy. It's not hard to imagine why that is.
I think most of us agree that social media as it is, is broken. But internet forums and social media can also be a massively powerful tool for the people to connect and share information from a grassroots level.
Just like journalism has become compromised, so has social media. The solution isn't to abolish both, instead we need to find ways to strengthen their integrity and make them independent of partisan corporate or political interests.
The solution to misinformation from without cannot be the tyrannical coercion toward state-sponsored misinformation from within. Free speech and a free society have always been vulnerable and difficult because bad people can come in and say bad things to stupid people and make bad things happen. This is not a new problem. But throwing out that freedom is not a solution. You will not like the results.
No, you're the one suggesting we abandon "Western ideals of information freedom", so you need to propose what ideals we should have instead, and what practical changes those new ideals will lead to.
One big problem that the report doesn't (I think) cover is that the current social media venues encourage competition - it's not a discussion and understanding amongst friends or family (or god forbid strangers), it's a competition for likes, follows, and reposts.
Even HN suffers from this, though far less than other places. Poor replies to popular comments receive more views; there are few detailed conversations, just people sharing their view and leaving; and opinions from often very knowledgeable people can get lost among hundreds or thousands of other comments.
Very low on details and high on the sort of platitudes you read on various blogs. Since it is relatively high-level, the only one I can kind of agree with is regulation. Right now it is still a Wild West company town.
I thought from the title that this would be about actually changing social media - business models, different UX patterns to get rid of dark patterns and antisocial patterns - genuinely interesting ways to fix social media. Instead it's more of the same old "we need more censorship online" rhetoric. It's a shame; there are interesting discussions to be had about the impacts of social media and the engineering of it, but it seems most people are not actually interested in solving the real problems.
Even if you don't show the numbers, won't more controversial content still be promoted more because the company behind the network still knows the numbers and wants to increase engagement because they need to sell advertisements?
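As an illustration of that point (a hypothetical sketch, not any platform's actual ranking code; the post fields and weights here are made up): even if like counts are hidden from users, the company can still sort the feed by the internal engagement signals it alone sees.

    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        likes: int      # hidden from users, but still stored internally
        comments: int
        reshares: int

    def engagement_score(p: Post) -> float:
        # any weighting of this kind still favours content that
        # provokes reactions, i.e. controversial posts
        return 1.0 * p.likes + 3.0 * p.comments + 5.0 * p.reshares

    def rank_feed(posts: list[Post]) -> list[Post]:
        return sorted(posts, key=engagement_score, reverse=True)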
I think it’s important to acknowledge that a) you’re absolutely right and government contracts are a brilliant extreme example of this but also b) regulations are precisely the things that allow markets to exist at scale.
We often talk about regulations in terms of volume, but the actual details matter.
Social media as it is now ("connect everyone together") is definitely a bad idea. Communities of people only work when they're close knit or share an identity, not when you have mash-ups of different people in different communities all randomly tied together and forced to address each other's opinions. You might as well have a barbeque where Hell's Angels, an LGBTQ support group, and a Catholic church study group all sit in a circle for no reason.
Online groups only work when they have a uniform identity, an ingroup, a particular set of shared values. They can have other ideas or values, but it needs to be clear to them that they should only bring up that group's values. Communities can also be toxic and stupid, but they do keep cohesion when they're all aligned on a core theme.
I still remember when I was laughed out of my Masters class on Mass Communication for saying that giving everyone a voice might result in a net-negative. I’m not sure we’re there yet, but that’s where we’re headed.
All of these efforts are being pushed under the guise of making sure people hear "the truth." The Real Truth. As long as that is their emphasis, it makes it very difficult to argue against any draconian measures people want to put in place. Because, why would you be against any measures that help people hear The Real Truth? That means you would be Anti-Truth, and nobody wants someone who is Anti-Truth.
Are enough people big enough suckers to allow this to happen under these false promises? We will find out. If you do support measures to control what people see and say, at least do the rest of us the courtesy of not pretending that it will bring about a utopia of free exchange.
Bingo. Water flows downhill, systems tend toward entropy, and businesses do whatever makes the most money. If those actions (like child labor) are unacceptable to society, laws have to be made to restrain them, or they will keep happening.
In general this reads like a sickening melange of half-true diagnoses (yes, of course lack of competition is a problem) and naive appeals to the nanny state that don't understand what they're inviting. To wit:
"Renée Diresta, research manager at the Stanford Internet Observatory, said policy should also differentiate between free speech and free reach. The right to free speech doesn’t extend to a right to have that speech amplified by algorithms."
Yes citizen, you can say whatever you want. You just have to say it in this sealed, soundproof room with no windows! This is almost literally the author saying "You have the right to speak, you just don't have the right for anyone to hear you", which is nonsensical. Free speech is meaningless if you can't actually reach anyone with it. The right to yell your deepest beliefs in the middle of the Sahara desert isn't free speech.
But of course, this ability to silence "misinformation" is fine, because it will only be used against the bad people! Surely it will never be used to silence people like the author, because they're good and right! Never mind that this is already happening (Youtube is famous for silencing certain LGBT and sex-positive figures, as well as any number of left-wing activists, for example).
To me it seems the spread of misinformation is probably a self-fixing problem. Users have no choice but to develop critical thinking about what they read and watch, so they probably will, and this is good. I was actually glad when deepfakes emerged, because they make it more vivid and obvious, even to those who were oblivious before, that you can't just believe everything you see.
my criticism of some of the potential solutions addressed in the article (number = number in the article):
1. stop the spread of fake news: this has already been happening; most of the big offenders are gone (e.g. Alex Jones), but it should be noted that this can (and has) led to backlash. The fake news I've seen always comes from tiny accounts with unexpectedly viral content, usually taken and reworded from banned sources (Alex Jones et al.). I don't think cracking down on the most prolific offenders will necessarily be the fix. The actual report is much clearer and more thorough on this than the article.
3. Lack of regulation for social media companies: I feel like if there is global regulation (which there should be), many countries will just ban the platforms (e.g. China, Turkey and Russia), and we still end up with the same balkanization.
5. polarizing algorithms: I don't think slowing online interactions will solve this. If someone wants to be racist they will be racist. I think this will just be an annoyance.
6. better social media business models: they say that they worry that "the best, fact-checked information is available only behind a paywall" but that is already the case!
I recommend people go to the actual report; the 25 solutions are on page 16.
The tactic was highly successful on SA, but it can only do so much. Having a cover charge keeps out lazy spammers, children, and casual vandals/trolls pretty well. Unfortunately, it doesn't do anything to keep out those who are determined to be disruptive and have anything resembling real income. Even with 5 bans a month, $50 is trivial to any adult with a job if they really, really want to stay and cause havoc.
More importantly it's nothing to billion dollar corporations or governments. Money should not equal speech. Instead I think that firstly we need a "proof of personhood", a way to identify real individual users and distinguish them from bots and shills that run multiple accounts. As it is now the system can be easily gamed which works great for those who have plenty of resources. Genuine users get the short end of the stick.
Social media requires that the people you are interested in are there for it to have any value. Not enough of my friends are going to be willing to join facebook for >0 usd, which means I wouldn't be willing to join it either.
Twitter can maybe be saved here, but they would be a lot smaller.
Sadly 10 usd/year isn't enough to keep out the riff raff, in fact it would probably entice people to see it as an investment meaning we would get ever more crap from influencers, "thought" leaders, etc.
The following will never happen (or be corrupted at inception) because social media is a tool of the capital and political classes for spreading their propaganda:
> 2) Focus on and shut down prolific disinformation networks
> 4) Use accuracy nudges to crowdsource falsity labels so that algorithms can be trained to automatically identify lies.
> 21) Tie consequences of legal violations directly to corporate executives, not just their companies.
> 22) Strengthen regulations by adding taxes, such as programmatic media taxes, to deter algorithmic amplification.
> 23) Ensure that users consent to data use; protect data privacy.
> 24) Distinguish between speech and reach--the right to speech and the right to amplification of that speech.
You’d be surprised how easy it is to detect lies. Ask most people (and no doubt the authors of this are included) and they’ll tell you “they’re told by my political opponents and those who disagree with me”.
Two very different groups hate social media for two very different reasons.
Ordinary internet users hate it because it represents a serious loss of autonomy for them, and a degradation of service. It makes the internet less free than it used to be.
The wealthy and powerful (and those who love brown-nosing them) hate social media because it's STILL TOO FREE. There's way, way too much information there that threatens their vice-grip on the world, and it has to go. NOW.
This might sound rude and doesn't add much to the conversation, but this issue can't be fixed unless you "fix" people.
We can be eloquent, humanitarian, calm and polite, but the reality is that we can also be cruel, vindictive little animals that set out to ruin everyone's day. There is no fixing social media because our nature doesn't allow it; you can fix the business aspect, but not how the users behave, unless you restrict them to the point where we can't call it a social media platform anymore.
TL;DR: Fixing human nature isn't going to happen. The more popular a platform is, the more positive and negative voices you'll have on it, and we all know that we linger on, and give more attention to, the negative than the positive.
I'm surprised that none of the things I want are mentioned. here are (some of) my ideas:
1. know who you talk to
if you want to stay anonymous, use an anonymous platform. HOWEVER, that guy you entered a twitter shitstorm with? might as well be some 12 year old, broken english, russian troll. that chick who messaged you? 56 year old chinese gypsy.
2. mark content type
sarcasm is (imo) very important. but in the wrong context, or with no context, it can be horribly misread.
3. exact timestamp
I don't want to see that a post is from "a few months ago" (hidden, in small text nearly the same color as the background). I want posts to be color-coded by age & type.
4. exact definitions
my language isn't the same as your language. even if we both speak English, there are different meanings to different cultures, regions, etc. if something is ambiguous or unknown, I'd like to get a dictionary definition as reference.
5. I don't care about likes. or views. I want to see the views/likes ratio. if a post has 1000 likes, is it good? maybe, unless it has 1m impressions. or it's exceptional if it only has 2000 impressions. that's a far better measure of quality (a rough sketch of the ratio follows below)
and there's more if I just keep thinking about it. probably a lot more
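to make point 5 concrete, here's a minimal sketch of the ratio, assuming hypothetical like/impression counts (the function name and the example numbers are made up for illustration, not taken from any platform):

    def views_per_like(impressions: int, likes: int) -> float:
        # impressions per like: lower means a post earned its likes
        # from fewer eyeballs
        return impressions / likes if likes else float("inf")

    print(views_per_like(2_000, 1_000))      # 2.0    -- exceptional
    print(views_per_like(1_000_000, 1_000))  # 1000.0 -- unremarkable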
1: i can't imagine anyone trying harder at this than facebook is?
2: marking content as sarcasm completely undermines its impact. 'sarcasm tags' go unused due to lack of purpose, not lack of visibility. how does facebook's 'react emoji' functionality not cover this use case better?
3: you don't want the human-readable age, you want the exact absolute date, but also the age, which you want exposed as color..? the absolute date is generally easy to grab with js, but it's hidden to reduce clutter.
4: a product to put cross-culture communication up front would be an interesting prospect, but i've not seen anything like that, and i'd expect a full team, many revisions, and a patient and involved community to get functionality and ui approaching helpful...
5: you don't want the exact views and likes, you want a human-readable measure of quality that you just came up with and can easily compute in your head? (vs how you feel about predigested data re #3)