Hacker News
Intimacy does not scale (2021) (archive.ph)
193 points by dredmorbius on Oct 1, 2023 | 145 comments


I made a similar observation some time ago, so let me add this: human interactions in general don't scale. Social media easily turns completely normal human interactions into something perverse.

E.g., say you've just exited the cinema and vent to your friend that this movie sucked. What were they thinking? Who had the brilliant idea to change directors mid-trilogy? There's a bit of venting, but quickly things settle down. There's a bit of bonding over the shared experience. Nobody is hurt, you're just having a private talk with your friend.

Take that to Twitter, use the same words, and now the same thing is an addition to the torrent of hate aimed at the movie's director. You might have meant to reach your 10 followers, but search, recommendations, hashtags, retweets, etc can easily pull your statement out of its intended context.

Then there's what it must be like to end up on the other side of that. This happens organically; people aren't organizing around it for the most part. So the torrent of hate can regain its strength at random: someone posts about it 3 years later, it makes #2 in some countdown on YouTube, comes up at the top of AskReddit, etc. In real life most things settle down; the Internet allows rehashing old stuff effectively forever.

And I think human interaction breaks at those scales. A single person can't meaningfully interact with a million random people who mostly spontaneously decided to flood you with messages. You can ask a prominent one or two to leave you alone, but more can show up at any time.


One issue with social media not scaling is that ~5% of the population is insane.

In day to day real life, you don't notice 95% of the time, and when you do, you avoid that person in your life going forward.

Online, the 5% insane are over-represented, and a constant presence in any social media interaction beyond a few eyeballs.


I have a feeling that it is more like 99% of people are insane 5% of the time. Or if not insane, at least forgetting the principle of charity. And given that a typical social media text does not contain all nuances, someone is almost guaranteed to interpret the text in an absurd way. And it goes downhill from there.


95% of the time I try to be empathetic and patient and open minded when discussing online. 5% I’m in a bad mood, or tired, or sick, or whatever and I don’t try as hard. On those days, I should disengage. But, I’m addicted.

It makes me think though of all those people out there who don’t have a cushy life like many of us and suffer daily. I’m sure they’re in a bad mood a lot of the time, and the internet is their main form of entertainment.


There's both. There's the "I have persistent trouble coping with what we generally accept as 'reality' in a healthy, functional fashion" and "I've just had a moment of completely stupid/delusional/reactionary thinking". (The previous is completely my own wording and not intended to be precise.)

I 100% agree with the overall idea that we aren't able to deal well with the consequences of what we've created with social media.


I’m sure there’s quite a bit of variety here but having crazy ideas in one area seems to correlate with having crazy ideas in another area.

Separating fact from fiction is a skill and people don’t all end up making the same number of mistakes. Similarly, people who enjoy trolling flat earth ideas are going to enjoy tweaking people with other bizarre ideas.


The second you decide you're not prone to being irrational is the same second you give up any hope of policing your own rationality.

It's actually pretty common for extremely intelligent, methodical scientists in one field to have somewhat bananas thoughts/opinions in unrelated fields.


Nobody is going to be completely correct across the full breadth of human knowledge. However, people can be wrong without their ideas being crazy.

If I was going to pick my most controversial idea it would be something like: humanity was like a ring species going back thousands of years, with people somewhat regularly crossing the Bering Sea without realizing anything particularly unusual was going on. That’s a somewhat wacky idea because diseases didn’t make the jump etc. But it’s wildly more reasonable than ancient aliens.

So as long as we are talking crazy rather than simply incorrect, crazy isn’t evenly distributed.


With all due respect, the narrative being spun here positions the social network, as an entity, as an impartial, disinterested actor that is simply facilitating conversations: this is not the case. All mainstream social media websites are (operated by) for-profit entities. They are selling access to human attention to advertisers to generate profit. Because of this, they are motivated to increase the amount of attention they have to sell, i.e. keep people in their app. And they have found, via reliable experimentation by numerous social scientists, that the most effective way is to keep users in a near-constant state of some combination of rage, indignation or disgust.

This poisons the entire well. I don't think the issue is that ~5% of the population is insane. I think the issue is that social media has every reason in the world to find that 5% and give them the largest megaphone possible, up to and including TikTok's model of outright paying them for their insanity. The fact that the agent mediating all this intimacy (as the author puts it) is inclined to do this, to shove the insane person in front of as many people as possible because they will readily cause reactions and further engagement with the product, makes it impossible to ignore them, or to ignore the subsequent consequences of them being shoved in front of so many other people.

I've said it before and I'll say it again: social media cannot be a profit-driven industry. It simply cannot. The incentives for what makes a good, healthy, useful social media apparatus and the incentives that make it profitable to industry are two circles, with MAYBE 2% overlap in the middle on a good day.


Two things: First, do the problems still happen in places that aren't financially motivated, like a hobby forum some guy hosts or on volunteer Mastodon servers?

And don't internet forums broadly speaking also have problems even if they don't have feeds engineered for engagement? For example 4chan is just chronological sorting.


> First, do the problems still happen in places that aren't financially motivated, like a hobby forum some guy hosts or on volunteer Mastodon servers?

I can't speak for Mastodon as I've never used it, but I was an avid user for a long time of many forums and boards on many topics back in the old Internet. And while we certainly had our share of flamewars about shit related to the topics, nobody was ever building a following invoking the tenets of White Supremacy on any of them?

> And don't internet forums broadly speaking also have problems even if they don't have feeds engineered for engagement? For example 4chan is just chronological sorting.

I mean, 4chan is a large board that doesn't have feeds, sure, but it does have different boards for different things. Most of them are benign nonsense and weeb culture stuff. What most people think is 4chan is actually 4chan's /b/ subforum. And while voting systems certainly increase engagement, I don't think they're required at all, especially when the larger community has already defined a culture of... well, whatever you'd like to call 4chan. It's certainly interesting whatever word you want to use.


> I can't speak for Mastodon as I've never used it, but I was an avid user for a long time of many forums and boards on many topics back in the old Internet. And while we certainly had our share of flamewars about shit related to the topics, nobody was ever building a following invoking the tenets of White Supremacy on any of them?

Ok, but that may be just because they were over at Stormfront, which seems to have existed since 1996.

Although I guess not having to share a platform and having undesired interactions with some groups is preferable.


> they have found via reliable experimentation by numerous social scientists that the most effective way is to keep users in a near-constant state of some combination of rage, indignation or disgust

this is only true for social media that promotes particular content, which means it is not true for signal, telegram, whatsapp, wechat and many others that just share messages as the users intended.


I've never used WeChat, but are signal/telegram/Whatsapp considered "social media" now? I thought they belonged to a separate category of "messaging apps", which consist of people messaging others that they already possibly know from a different channel.


fair point. this is a matter of definition. i agree with the distinction, but this is not shared by everyone. see: https://en.wikipedia.org/wiki/Social_media so i find it useful to point it out.

case in point, in this wikipedia article wechat is clearly categorized as social media, even though its messaging component is no different than whatsapp, telegram or signal, which are "maybe" social media.

in summary, some people call them social media, and some don't.


This is not the only use of those apps. There are plenty of large channels that are either broadcasting from a central source or a sort of forum, sorted by new, of people who don't know each other but share some type of interest, especially in countries and regions where those apps are dominant (which I believe is not the case in the US).


As the other commenter said, I think most people would call those more messaging apps than social media (though wechat blurs the line even further being a sort of everything app but I digress). That said it's also worth considering that few users of any of these products use only one; therefore, the network effects of say, Facebook steering people into more extreme content, will reflect back when they join more extreme Telegram channels, with it being at least decently plausible that further interaction on Telegram will steer them into yet further Facebook Groups or into the influence of content creators elsewhere still.

Enough of these utilities engage in this attention-grabbing funnel tactic that the effects of their algorithms far exceed the boundaries of their own products, and reinforce each other as well.


> network effects of say, Facebook steering people into more extreme content, will reflect back when they join more extreme Telegram channels, with it being at least decently plausible that further interaction on Telegram will steer them into yet further Facebook Groups or into the influence of content creators elsewhere still.

the distinction is where the steering comes from. on facebook the steering is affected by the algorithm, which decides what messages you see, while on telegram it is not.

on facebook, twitter, youtube etc, i can be confronted with extreme content that i didn't ask for.

on telegram, whatsapp, wechat, etc, i only get content that i explicitly asked for.


Oh yeah, 100% agree, and they're much less toxic for it. But messaging apps like Telegram, because of the limited reach into their groups, are also more difficult to moderate, which is worth noting too. The people who believe whatever extremism is on offer are not likely to report those channels, and even if they were, Telegram could just remove the reporting user, say "we'll take care of it" and do nothing, because the likelihood of followup is basically nil.


why would groups have to be moderated from the outside? if their content is not public because you can't find it unless you explicitly join the group does it matter if the conversation is happening on telegram or in a room somewhere in the city or in a private forum somewhere else?


I mean, yes? A non-exhaustive list of things found to be happening in private telegram channels we probably want to stop would include: coordination of harassment campaigns, a plot to kidnap a sitting United States governor, a large part of the January 6th event in the United States, a shit ton of hate speech, doxxing, confidence scams, distribution of material to facilitate genocide...


that's not moderation. that is surveillance.

we can argue how much surveillance is acceptable or necessary. but that is a different topic.

the purpose of moderation is to avoid discussions being sidetracked by extreme messages or bad faith replies or other noise, so we can focus on the topic at hand.

but whatever happens in these groups does not affect the discussion in the groups i am joining, so their moderation is quite irrelevant to me and provides me no benefit.

this is different on public forums like hackernews where i can access all content and expect to have a civil discussion everywhere. hence moderation here is necessary and welcome.

but if telegram were to come in and try to police speech in my private group then i'd be very upset and look for a different service with more privacy.

what you are effectively advocating for is that such private groups should not be allowed to exist because there is no way to control what is being shared in those groups.

yes, we want to stop these things, but blaming the tools for allowing them to happen is the wrong approach. either we allow private conversations or we don't. if we don't, then no one can have private conversations anywhere. that is not something you should want.


I always described this as "The old time concept of the 'Village Idiot' still exists and now they have a way to discuss their nonsense with other likeminded people which gives them a bigger voice"

Completely agree with what you're saying


While that ~5% may be insane, it also appears that around 30% are hopelessly gullible, and herded into voting blocs by power-hungry manipulators.

I think this post is almost there: what does not scale is humanity, our current constitution of largely selfish, immature adults operating as our leaders. In order for our civilization to scale, we need more real adults, and not these immature adult-babies running things.


Alas they don't always get voted in, or even get involved in modern day politics.

That just leaves the attention seekers, and people with a burning agenda.

Saying that, what would make a good national politician and leader is probably quite a rare combination of traits.

Good understanding of politics (natch), law, history, economics and statistics (as well as possibly industry, marketing, PR, the money markets), and have a degree of personal charisma.

If the healthy ongoing survival of democracy needs that, we'd better get better at producing these people. England (sorry Scotland, but there is a bit of a gap there) and the US try to solve this with their Oxbridge and Ivy League universities - but recently the results have not been so good. Maybe the upcoming younger generation can surprise us all?


How can we expect people with nuance to win elections if the population has been trained to expect solutions that fit in 280 characters and/or 15 seconds of video?


You don't.

It ends up like the clip from Futurama, where Jack Johnson and John Jackson both end up talking about the 3-cent titanium tax going too far, or not going too far enough.


Well put. I think the 5% is a combo of "low IQ psychopaths", narcissists and just assholes ("bad people"). The word insane IMO points perhaps too much to an inability for rational judgment, while a certain share of that 5% is very much aware of their impact. The most dangerous among the bunch are obviously high IQ psychopaths...


Anyone who has worked a retail job, sat on a condo board, worked in a medical office, etc .. dealt with general public, basically understands this principle.

Someone will inevitably nitpick my particular choice of word, but "insane" is the general non-clinical description of the category.

Points to high value in making your kids work a bad retail job in high school to build some character & experience this first hand early.


Indeed. This [0] compilation from Parks & Rec is obviously an exaggeration, but if you've ever spent any time with the public as a representative of something, you'll recognise most of these people.

[0] https://www.youtube.com/watch?v=areUGfOHkMA


These people really deserve to interface with an LLM!


These people will end up training the LLMs. And there's nothing you can do about it.


i suspect this is one of the driving fears many people have with LLMs.


I'd met an old friend who runs a high street business, walking into their shop and chatting briefly before a random walked in off the street and said random stuff, then walked out.

My friend shrugged, turned to me, and said, "Welcome to retail".

I've related that to a few people over the years. It's a useful lesson to keep in mind. Much better to recognise this for what it is than attempt to fix, fight, or rationalise it.


> Much better to recognise this for what it is than attempt to fix, fight, or rationalise it.

This is a morally and emotionally sophisticated position, and though I agree with it, it's a bitter pill.


With an ever-increasing number of cameras and AI, I would expect that fighting it will start to look like facial identification blocking people from entering businesses for their past transgressions, online or off.


What I'd meant was attempting intervention in the one-on-one case. It's simply not productive, you're better off generally either letting the wave roll over you, or making a quiet exit.

The mass-surveillance / access denial approach is a different tactic. One that has some advantages, but also very clear disadvantages and inequities.


> Points to high value in making your kids work a bad retail job in high school to build some character & experience this first hand early.

It also suggests that banning access to social networks is the right call until kids get to a (mental/educational) stage where they can judge these things in a similar manner.


You thinking like 30s? 60s? Maybe once humans turn 100 they're emotionally capable of being normal on the internet?


For reasons, I had to learn about actual insanity. 5% is spot on (filed under believe but cannot prove). The kicker is when someone is high functioning, as in can pass for sane, in most contexts. But then unpredictably acts totally out of bounds, sometimes in very scary ways.

Start thinking about insanity and you'll get pulled down the sinkhole of "what is sane?", "am I insane?", "would I know if I wasn't?"

IMHO, the takeaway point isn't "insane" or "sane". It's more like "Do I have a theory of mind for the person(s) I'm interacting with?" and "How do I have compassion for people who are suffering in ways that I don't understand?"

YMMV.


When you add bots, it's way more than 5%.


Only 5%? While most of us in the US have been drinking lead-filled water for the last 70 years.


I'd say it might be closer to 30% of the population that is unhinged at some level.


And to add another case, you step out of the cinema and make some rather tasteless remark about the lead actor.

Your relatively well-chosen circle of friends reacts to that negatively and you concede that you were probably being unfair.

The online equivalent of that is more like a splitting of universes. Half of the comment's blast radius impacts a group of people who much like your friends finds it distasteful, but unlike your friends has very little other context about you with which to build an image. They launch an attack against this new simulacrum.

The other half finds, amongst the near-infinite crowd of people you're not hanging out with right now (many of whom wouldn't have passed some other social filter of yours), some number of people who agree with and amplify your off-the-cuff nasty take.


Also, in your example, if you start this remark, you will immediately get subtle non-verbal feedback, often before you have even ended your first sentence. If you see a few friends cringe, twist their mouths, or start with "errr...", you quickly realize that your opinion is unfair, strange, or has no majority. You have a chance to pull out before too much (or any) damage is done, often by simply adding an attenuating end to your sentence or by changing the tone of your voice to make it clear that you are being overly cynical / sarcastic. On social media, you have usually published your opinion in several multi-sentence posts, without any non-verbal context at all, before any feedback is received.

Human conversation is a delicate dance to a fine, multi-dimensional tune. In comparison, online conversation seems like shadow boxing.


The trio of direct-ancestor messages I am replying to is a great summary.


So, are only opinions which have a majority worth voicing these days, or even worse, should every opinion that does not conform with majority be kept to oneself? Do you know what you are implying with this?


My choice of the word "majority" was unfortunate, but I cannot edit the comment anymore. Remove "majority", leave "unfair" and "strange".

Case in point: if we had this conversation in real life, you would've cringed after I used the word "majority", and I would've immediately realized and corrected my poor choice of words. Without this feedback, I suspect that you were now under the impression that I am for the suppression of minority opinions. Which would've been the clickbait headline of the news coverage of this conversation if I were a minor celebrity, and this was Twitter/X.


Frankly, if you are a (minor) celebrity, public interactions and the drive to be known for something are an essential part of your trade. I totally lack empathy for these people getting some pushback. Do we now have to have pity for attention-hungry individuals who receive a substantial amount of their income from being public? After all, they exposed themselves to feedback deliberately. I really don't have to love everyone, and if they barf their opinion onto my face, I feel entitled to tell 'em that I think they are full of ....


That's not a charitable read at all of the thread.

Unpopular ideas are essential, but the tight feedback loop which the former physical constraints offered was still a useful component. It acts as a counterbalance to our tendency to let ideas and thoughts that are poorly developed, more damaging than they are useful, or just logically/factually flawed consume too much attention.

I know when I pay attention I notice that I can think a lot of ridiculous things, and it would take some gymnastics to arrive at the belief that every dumb little prejudiced / biased / flawed / incomplete thought is worth promoting to a global audience.

Sometimes an idea will have merit despite initially receiving bad feedback in the context where it was first voiced. That feedback is still helpful, though, and may prompt some refinement before it goes any further. Many ideas that don't have much merit die here, and that's a good thing.

The alternative is broadcasting it raw straight into the forever archives to be fought over and promoted based, at least partly, on how outrage inducing or divisive it might be, and it's my belief that we're worse off for this becoming more of a norm.


Not parent commenter, but I believe everything social should be kept to real life. The dynamics of online social media have almost nothing to do with real human interactions. It is a waste of time and a harmful superstimulus substitute, like empty-calorie fast food for real nutrition.


Keeping personal matters offline might be a reasonable ideal but a highly unreasonable and unfair practice. Many people do conduct significant social actions online.

There are people who have no local access to a similar or understanding peer group. We often hear of this now in terms of sexual identity or preferences, but it could be anything from interests to skills or aptitudes to medical or psychological conditions. People go online to find community, especially community that's not represented locally.

(This, like The Force, has both a light and dark side, of course.)

There are also people who are distant from friends, family, or other community, and for whom online group interactions are among the few available options.

We read and hear now of the closely guarded and coded language that was used to refer to situations and circumstances in Victorian times. Slangs and argots arose to be able to communicate within an anti-society whilst excluding normies. People today may use similar methods (though tools for tracking slang, such as Urban Dictionary, tend to catch up quickly).

Technical means may help, as can anonymous or pseudonymous identities, though both these have their own serious limitations as I've described in other comments on this thread.


the problem is that online discussion comes with higher risks, and one needs to be aware of those risks. but many aren't.

your points are of course good, but there exists private online groups where these risks are lower so especially friends and family are a non issue as there is no problem to have a private online conversation.

finding your community online is more difficult, but the point is not that you should avoid online groups, but that you need to be more careful how you communicate in online groups. you can't just hop in and spill your personal feelings without being aware of how those messages will be received. you want to get to know people first, and that takes more effort and time online than in person. it depends on what kind of people are in the group, and also if the group is public or private.

hackernews is public but most people are reasonable here and bad faith messages are not tolerated, so for a public group it is a pretty safe one, unlike twitter where you risk having your messages promoted to people with an unhelpful attitude.


Among my points are that intimacy and scale are inherently at odds, and that human psychology prevents the public at scale of registering this. If there's a solution, it's going to be in the design, description (and marketing), operation, and regulation of those systems. This is a classic case of "personal responsibility" being a trope and cover for dodging corporate and engineering responsibility.

Evidence of this comes from the level and scale of information breaches: the US DoD, Department of State, Department of Justice, multiple states' attorneys general offices, the Russian Kremlin and military establishment generally (I have strong though as yet unsubstantiated belief that a key factor in Ukraine's success has been a near-total compromise of Russian communications channels). These are entities with a strong incentive to and capability for ensuring secure comms and data management ... and yet ... they're failing. The refugee family or abused mother or whistleblower ... stands little chance.

One of the criticisms of HN is that people are occasionally attacked for their expressed viewpoints, occasionally on-forum (though that's usually swiftly dealt with), more often off. I've seen some well-known and high-karma leaderboard profiles callously call for the death of entire groups of people. And there are sites which do kibbitz on "Orange Site" as they tend to call it, often criticizing its moderation or behaviours, but also conducting just the types of abuse you're describing.

HN also lacks some of the specific protections you describe. There's no private or limited spaces, direct messages, or similar mechanisms, by intent and design. There is the option of throwaway pseudonymous accounts, however, which helps somewhat.


> Do you know what you are implying with this?

Ironically you're replicating the phenomenon being described: taking something out of context, exaggerating it to the Nth degree, racing down the slippery slope, and using this to imply that OP is Hitler.


> human interactions in general don't scale

The reason why human interactions do in fact scale socially is because in the real world there are consequences for bad behaviour. People carry resentments but restrain themselves because of the risk to their benefits in participating in society.

In the fiction that is social-media business, there are no consequences. Petty resentment trumps social principles.


Social media in its current incarnation has a foundation of perverse incentives, so its outcomes will of course be perverse.

IMO social media has massive potential to be a truly positive force in our existence, if only the positive incentives could be embedded into its fabric.


That's mostly describing context collapse, yes?

<https://en.wikipedia.org/wiki/Context_collapse>


It's also worth noting that even with in-person interaction, when you've got a party of more than 4 people, even sitting at the same table together, the group typically splits into smaller groups with their own individual conversations.

Any in-person situation where a "group conversation" is meant to scale (ex: town hall meeting or class room), there is always a person in charge of leading and moderating the conversation (hand raising, mic queuing etc.).


>> Take that to Twitter, use the same words, and now the same thing is an addition to the torrent of hate aimed at the movie's director.

That's because of the commercialization of social media and making posts public. If you comment in your private FB feed you will most likely get a minimal response. It's still not the same as talking to the person you just saw it with, but it stays within a closer circle.


But then it's not really social media anymore, and closer to good old communication through telephony.

The point is that 1 person broadcasting to many doesn't scale, unless you are prepared to deal with the mess.


Doesn't the movie example provide a counterpoint? Some human interactions do scale: town-hall speakers, specifically, do on Twitter, on Instagram. 1-1 discussions don't.

Is the point more that social media is mixing various human interactions into the same bucket of "online discourse", and mixing the two (1-1's vs town halls) makes it "something perverse"? It's like accidentally handing your friend a mic during your walk back from the movies.


> Who had the brilliant idea to change directors mid trilogy?

Whoever it was hopefully didn't also annihilate all the other properties under their purview in the last 2 years.


People have to understand that posting to Twitter is the equivalent of shouting into a megaphone at Times Square. It's not the equivalent of sharing something with your close friends, even if some people might think so.

Yes, most of the time your tweets will only be noticed by your close friends. But they're still public speech.


One useful analytic trick I've developed is of inverting propositions: "human interactions in general don't scale" gives us "how is it that human interactions can and do scale?"

Because one frame of history is to look at it as the story of precisely that scaling: family units to clans, clans to tribes, tribes to warlords or kingdoms, kingdoms to empires, the emergence of democratic republics (or of communist states, if you subscribe to an alternate arc), increasing scales of militaries, of religions, of commerce, of academia, etc., etc.

Each of these can be considered as networks, and there are some well-known models of network value.

The naive Metcalfe model states that network value grows with the square of the nodes.

A superior alternative was proposed by Ben Tilly (btilly at HN) and Andrew Odlyzko, which is that the value grows as n * log(n), that is, additional nodes add value, but comparatively less over time.

I've extended that to include a constant cost function, that is, one that's uniform for the network (at least on average), though it may change over time:

  V = n * log(n) - k * n
This means that the value of the network grows so long as k * n is less than n * log(n). Put another way, k constrains the maximum size of the network.

Your network will grow if you can reduce k and keep it small.
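The cost-adjusted model above can be sketched numerically (the function name is mine; this just evaluates the formula):

```python
import math

def network_value(n: float, k: float) -> float:
    """Tilly/Odlyzko n*log(n) value, less a uniform per-node cost k*n."""
    return n * math.log(n) - k * n

# Value is negative until the per-node value log(n) exceeds the
# per-node cost k, i.e. until n > e^k.
print(network_value(5, 2))    # negative: too small to cover its costs
print(network_value(100, 2))  # positive: value has outrun cost

# Reducing k (hygiene, moderation) raises the value at every size.
print(network_value(100, 1) > network_value(100, 2))  # True
```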

One way this manifests is as hygiene factors. Through the mid-19th century, the maximum size of a city was limited by its reproductive rate less its death rate plus net in-migration. As deaths typically outpaced live births (from disease and accidents, generally), cities needed to sustain large in-migration simply in order to maintain constant size. And the limit for 19th-century London was about 1 million people, roughly the size of ancient Rome at its peak. Deaths from epidemics such as cholera would claim tens of thousands of victims per year.

The solution was public health and hygiene, particularly the establishment of both clean fresh-water supplies, and of removing sewage waste. That latter was somewhat accidental: sewers built to drain away storm water were connected to by individual households as those installed flush toilets. The sewerage and storm water were drained away far to the east at the Thames Estuary.

Further advances came with food purity regulations, pasteurisation of milk, improved preservation and canning, refrigeration, increased bathing and handwashing. In the case of New York City (which saw a similar set of improvements) 85% of the reduction in mortality from 1850 to 2015 had occurred by 1920, which is before the introduction of modern antibiotics, most vaccines, organ transplants, medical imaging, and cancer treatments.[1][2] I'd first seen this pointed out by Laurie Garrett in the 1990s.

The further increases in the late 20th century largely come not from medical technology but rather from access to medical care, and show far more strongly in under-served populations (minorities generally, Black women, and especially Black men) than among the White population.[3] More recently Robert J. Gordon made this a major point of his analysis in The Rise and Fall of American Growth (2015), noting an almost complete halt to medical progress (as measured by outcomes) beginning in the 1960s / 70s.

Put another way: one of the key reasons a site such as Facebook can grow to 5 billion MAU is because it tamps down hard on the systemic costs imposed by each additional member. In particular I've noted a general progression of decline as sites scale, with consistent patterns at roughly order-of-magnitude scales: 10, 100, 1k, 10k, 100k, 1m, and beyond.

Networks have a number of characteristics: scale or size, topology (e.g., point-to-point, star, tree, mesh, hybrid), speed (the rate at which actions or messages propagate), etc. It's far easier to maintain control over a star (broadcast) network than a mesh (fully p2p) network, as the central node is the sole originator of content, and mediates all interactions. As I've described elsewhere in this thread, hierarchical, modular, or divisional structures tend to scale far better than monolithic ones. They do of course have their own limitations.

The key takeaway though is that scaling interactive networks, of any type, is largely a cost-minimisation function.

________________________________

Notes:

1. Graphically illustrated in "The Conquest of Pestilence in New York City" <https://1.bp.blogspot.com/-uTWEATUzgxk/TXQoTibILtI/AAAAAAAAA...> <https://economicspsychologypolicy.blogspot.com/2011/03/conqu...>

2. A surprising number of anti-cancer chemotherapy treatments can be traced to the chemical warfare compounds of World War I. <https://medicine.yale.edu/ycci/clinicaltrials/learnmore/trad...>

3. See: <https://www.usnews.com/news/blogs/data-mine/2015/01/05/black...> and "The gap between blacks and whites was seven years in 1990. By 2014, the most recent year on record, it had shrunk to 3.4 years, the smallest in history, with life expectancy at 75.6 years for blacks and 79 years for whites." <https://www.nytimes.com/2016/05/09/health/blacks-see-gains-i...>


Great post. I wouldn't be so defeatist about technological solutions though. I can think of a few technologies that we already have which have allowed intimate, important and meaningful discussions to scale (imperfectly) in our past.

1. Parliamentary procedures. This is a tool that allows debate to scale up to a few hundred or thousand people. It provides rules for how topics can be chosen for discussion, altered, debated, and for how to reach consensus during a discussion. It's a social technology that is not easy to apply, not obvious, but used by dozens of countries and I've no clue how many large organizations.

2. The judicial and academic systems. Again, not flawless, but these ones show that we can have asynchronous debates over long periods of time where people reference and use old cases and studies.

There may be other mechanisms which humans have used to structure conversation, facilitate debate and allow for important expression. All of these require training and only work when participants agree to certain rules. To my knowledge none of these apply to online media. But, who knows, maybe it will take a few hundred years for us to discover some social technology that does. The examples I gave were not obvious to society and took similar timescales to develop.


Both 1 and 2 provide a good framework for important and meaningful discussions, but not intimate ones. Most social conversations aren't obliged to resolve a problem, as in those two cases. In intimate discussions it's most of the time good enough to be heard.


Those are two excellent examples, as they're certainly technical, but not based on advanced information technologies, which is a common bias in HN discussions.

I've done some informal looking into governmental structures, with one key entity often being some sort of central cabinet, central committee, politburo, or privy council, which can be found in a wide range of governmental systems: representative democracies, parliamentary, Communist, and monarchical. In almost all cases, that group tends to be on the order of 5--9 or so individuals. This would mean that, say, the US Cabinet (28 members) would have a much smaller core group, with the most powerful positions generally being considered as the Secretaries of State, Treasury, and Defence. Along with the National Security Advisor and perhaps the Attorney General, this would give a core council of six members, including the President.

Parliaments virtually always operate in terms of both committees and parties, with each of these being further subdivided. US House and Senate committees still tend to be large, roughly 25--50 members. Taking the US Senate Appropriations Subcommittee for Defence (one of twelve Appropriations subcommittees), there are 17 members, divided amongst the majority (9) and minority (8) parties. Taken by party, this gets us to my five-to-nine member core working group. There's typically further breakout by ad hoc working groups, and as I understand there are unelected staff (that is, subcommittee participants who are not themselves elected members of Congress) as well.

There's little actual work which occurs in full sessions of a legislature: votes (based on committee work), speeches (largely for public consumption, and often to an empty chamber), ceremonial proceedings (swearing in, State of the Union), with exceptions such as impeachment proceedings or the largely-but-not-entirely-it-seems perfunctory function of certifying national election results. What parliamentary procedures do provide for is the smooth functioning of subdivisions of full legislatures (e.g., committees and subcommittees), as well as rules for the overall operation of the largely pro forma full-chamber sessions.

That said, procedures are key in those functions.

The judicial system is interesting in that courts often operate highly autonomously, with individual judges having extreme discretion over the operations of their courtrooms, and courts collectively over their dockets.

A US trial court consists of a single judge. If held before a jury, there are usually 12 jurors in civil and criminal cases (fewer in some state courts), though jurors don't actively participate in the trial proceedings themselves, but rather observe the prosecution / plaintiff and defence, as well as the judge's orders and rulings. After completion of the trial phase, the jury members deliberate amongst themselves to reach a verdict. Witnesses are fully independent (and are often excluded from hearing one another's testimony), and legal teams are also typically small in all but the very largest cases.

Other countries have various different systems, often with civil or criminal cases being tried before a panel of judges (frequently three). US appeals courts operate similarly, though in significant cases an en banc hearing may be held, with all the judges of the circuit hearing the case (six to twelve, I believe, depending on the circuit). Appeals courts and supreme courts, whether at the state, Federal Circuit, or Supreme Court levels, are the only oversight of individual court judges and judgements in the US. Which is to say, again, that the structure is highly independent and autonomous and based on small groups.

Academia is another interesting situation, again with relatively high autonomy amongst tenured faculty, though that's been decreasing. Academia is divided into disciplines, universities, colleges, and departments. Typical department sizes again tend to fall into the 15--50 full-time permanent members (there may be many more teaching assistants, lecturers, and other part-time or contingent positions).

You didn't mention business organisation, though that would be another useful model. The terms, I believe, are U-form and M-form, for unitary (monolithic) and multidivisional corporate structures, respectively, with the latter emerging at firms such as DuPont and General Motors (particularly under Alfred P. Sloan, Jr., at GM). Again, the goal is to reduce dependencies between organisational divisions whilst also achieving efficiencies of scale.

(Charles Perrow devoted much of his academic life to studying organisations, and wrote a survey text on the topic, Complex Organizations, which discusses much of the prior work in the field.)

All of which again generally show that communications scales poorly. Where it does scale, it almost always does so by hugely simplifying the content communicated. Markets, where prices substitute for a whole host of other qualities, would be an example of this. (That prices communicate complex qualitative differences and nuances poorly is another well-known challenge. Complexity will out.) Information automation does expand processing capability, but that tends to end up in a Red Queen's race, and often seems to consume free budget in additional processing, effectively a form of economic rent, rather than delivering improved profits, a/k/a the Solow Productivity Paradox.


Talking about intimacy of discussions happening in a public forum is a contradiction in terms.

Like in real life - if you were giving a conference presentation you wouldn't have any expectations of intimacy.

I don't know why people would communicate in public spaces if they don't want to interact with the public. Isn't that the entire point?


That's an incomplete understanding of social norms.

> in real life - if you were giving a conference presentation you wouldn't have any expectations of intimacy.

Of course you would. Every conference I've ever been to had a bunch of social norms attached to it, encoded implicitly or explicitly in the conference materials and attendees' behavior. People were at the conference to socialize on those terms. People who violated those terms would quickly find themselves excluded. Even those who were there to broadcast information as widely as possible still had a lot of expectations attached to how that would happen. At the best-run conferences you never notice any of this happening, because the people running it are good at their job - and the result is a feeling of intimacy that builds trust among the attendees.

Online forums carry all the same features.


This is why sometimes the idea that “free speech” is the best thing is incorrect. Moderation in moderation is actually the best model. Once you stop moderating you get a slide in behavior that introduces noise, derailing or worse. Moderation correctly done is not stifling free speech, but it also doesn’t allow for egregious excesses that overpowers other voices and opinions.


and what you get when free speech is less limited you can see on 4chan: https://news.ycombinator.com/item?id=37729001


Because our own human nature works against us in this case. The electronic medium acts as an (intimate) safe space because it's your own device where you share your mind, usually in the intimate safety of your home (or at least, most of the time, a familiar location). There just isn't that slightly unbearable feeling of 10000 pairs of eyes staring at you attentively for every word you write before you hit the 'send' button.


I've wondered who might have had a similar experience prior to widespread social media.

I suspect broadcasters and columnists for newspapers and magazines might have --- they would dash off a few hundred or thousand words of copy, or make a statement live on air, and start hearing back from people. In the broadcast case the response time might be comparable to the online experience --- seconds to hours or days. In print a piece might not hit the streets for days or weeks, giving a delayed hit.

But there's still the experience of writing up something in what's putatively a private space (office or home), without the immediate sense of ten thousand eyeballs, but knowing intellectually that that actually is the case.

(Do I dare hit "reply" now? ...)


The unique thing about social media is the symmetry of the experience. The broadcaster sits behind a desk. He may get written complaints fed back to him. He does not have to go home and then see a thousand someone elses behind a desk on TV criticizing him.


I'd draw the asymmetry differently.

Public performers, whether authors, actors, artists, etc., not reading reviews of their own work is so commonplace as to be a trope, and it long predates the Internet.

Rather, the asymmetry is that whilst in a broadcast-mode world the performer is a public character and faces the audience's response, the audience does not in turn have a similar experience. Very rarely a single individual might be made an example of ridicule, but my sense is that that was rare.

Today, the spotlight can, and does, turn on anyone at virtually any time. And it can be incredibly discomfiting when that happens. Moreover text, or audio, or images, or video, can suddenly be published around the world. Jon Ronson explored the contemporary experience in his 2015 book So You've Been Publicly Shamed, which describes several such incidents.

(At least one of these ... was featured fairly extensively on HN.)

But that's the distinction I was trying to draw. Prior to, say, the late 1990s to early aughts, it was people in broadcasting, mass media, entertainment, and a few in high-profile political or business careers who might draw mass attention with frequency. Now that effect can strike virtually anyone online (and in some cases off) with little warning or reason, and expose them instantly to mass judgement by strangers.

It's not that the broadcaster could go home to avoid the onslaught (and in many cases they couldn't). It's that the many others weren't similarly vulnerable.


Without presence in a physical public space people tend to behave in maladaptive ways, they don't really internalize that they're speaking publicly or even to real people. A lot of stuff said on the internet would not be said if someone had to say it directly to someone's face.

I've actually taken that up as a guideline, before I hit send on anything I do ask myself if I'd say that to someone sitting across the table.


> I don't know why people would communicate in public spaces if they don't want to interact with the public. Isn't that the entire point?

I broadly agree with you, but, a minor counterpoint: Depending on the social network in question, setting the privacy of a reply-chain is either tricky or impossible (RIP Google+, too good for this world...). If Person A posts something intended-as-public and Person B wants to respond to it, it is unreasonable to expect the average-Joe Internet-user to start up an appropriately-privacy-scoped discussion thread for it. The "in public" aspect of their discussion might not be by choice, or may not even be something that is in their mind at the time they make the reply. "They should just learn to use the tool better" is not a helpful observation - a tool that is routinely misused is, by definition, a bad tool.


One of the pains of the Fediverse (where the original incident occurred, and ones quite similar to it do frequently), is that the posting-scope options are:

- Public: everyone[1] has access to the toot, and it's listed on public timelines.

- Unlisted: everyone has access to the toot, but it is not listed on public timelines, and hashtags if used aren't ordinarily discoverable.[2]

- Followers only (FO): only accounts which follow your own can view the toot.

- DM: only accounts named in the toot, and yourself, can view the toot. Oh, and admins of the originating and any recipient instances as well.

The problem is that the followers-only scope is set for each toot within a thread. I'll often face the question of responding to another person's FO toot, and doing so myself FO, which means that my set of followers can see what I'm saying and potentially glean context, or as a DM. I'll often reply DM so that only the recipient sees my response, particularly if there's any sensitive information involved.
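As a rough sketch of the scope semantics listed above (this models the rules as described, not Mastodon's actual implementation; all names are mine, and admin visibility of DMs is omitted):

```python
from enum import Enum

class Scope(Enum):
    PUBLIC = "public"       # everyone, listed on public timelines
    UNLISTED = "unlisted"   # everyone, but not listed on timelines
    FOLLOWERS = "followers" # only the author's followers
    DM = "dm"               # only accounts named in the toot

def can_view(viewer: str, toot: dict) -> bool:
    """toot: {'author', 'scope', 'author_followers', 'mentions'}."""
    if toot["scope"] in (Scope.PUBLIC, Scope.UNLISTED):
        return True
    if toot["scope"] is Scope.FOLLOWERS:
        return viewer == toot["author"] or viewer in toot["author_followers"]
    return viewer == toot["author"] or viewer in toot["mentions"]

# The pitfall: scope is per-toot, so a followers-only reply is gated by
# the *replier's* follower set, not the original author's.
toot = {"author": "alice", "scope": Scope.FOLLOWERS,
        "author_followers": {"bob"}, "mentions": set()}
reply = {"author": "bob", "scope": Scope.FOLLOWERS,
         "author_followers": {"carol"}, "mentions": {"alice"}}
print(can_view("carol", toot))   # False: carol doesn't follow alice
print(can_view("carol", reply))  # True: carol follows bob, sees a fragment
```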

On platforms which have a post-and-thread mechanism, such as here on HN, but more so sites such as the late Google+, Diaspora*, and from my understanding, Facebook, responses are already scoped by the top-level parent's visibility. (On HN that's pretty boring: posts are either visible globally or not at all; it's more interesting elsewhere.) Microblogging systems such as Birdsite and the Fediverse ... don't have that, so context collapse within FO threads is far more likely.

Google+ also had the notion of "Circles" ... clunky, but usable, which were defined by the poster and which could be the target audience of a given post, and (in time) Communities, which were shared groups, public or private, open-admission or moderator-approved, to which third parties could subscribe themselves. (Similar to subreddits, generally.) Both did provide ways of defining scope and context of discussions, with some reasonable bounds on privacy.

There's also the challenge that multiple participants within such a thread may not be, and in all likelihood are not mutually visible to one another, so that the view of all participants but the thread parent author is likely to be partial, with fragments of the discussion open, and broken threadlets appearing and disappearing as followers and non-followers comment.

Then there's the whole notion of setting my privacy scope by who has chosen to follow me, which is to say, an action I've virtually no control over. The Fediverse has an additional setting which can somewhat manage this by requiring specific authorisation of follow requests, but that itself is clunky and cumbersome. It seems to me to be far more sensible to define a group either by explicitly joining one, or by defining one myself, and setting the posting scope to that group of my explicit choice or definition.

The upshot for all of this remains largely the same: using the Fediverse for highly-personal communications is probably a poor choice. But, see my earlier reply on thread,[3] people's sense of "public" vs. "private" online seems to be quite poorly tuned and not easily addressed by any means, technical or otherwise.

________________________________

Notes:

1. Excluding various block mechanisms which are ... complex in their own way.

2. The exclusion of hashtag visibility from Unlisted toots strikes me as among the more profound design/architecture failures of Mastodon. It's quite often that I don't necessarily want public-timeline exposure but would like hashtags used to be visible to those with a specific interest in them. Recent changes (v. 10.4) with some really useful search improvements (as in, it exists at all, but also specific filters and criteria) make this much less painful.

3. <https://news.ycombinator.com/item?id=37733680>


> I don't know why people would communicate in public spaces if they don't want to interact with the public. Isn't that the entire point?

These 'social networks' the link speaks to may rely on the public to function, but that is just an implementation detail, not the point. The point is to submit your not-fully-formed thoughts into a piece of software and have it come back with details you have overlooked so that you can come to better understand the topic.

However, the software it speaks to becomes buggy at scale, going off on random tangents that have nothing to do with anything, failing to stay true to speaking to details you have overlooked. It proposes that there is no way to scale that software to avoid such tangents because the implementation is ultimately, and fundamentally, flawed.


In real life it is usually fairly easy to determine when you're addressing a single individual, a closely-held group, or a large gathering. And large gatherings themselves are still generally constrained: it's difficult to directly address more than about 50 people without some sort of public-address system in use.

Online the level of exposure and amplification are difficult to judge. Your direct environment may be entirely private --- alone in a room, with a screen and keyboard (of various descriptions). Or you might well be out in public, but the immediate public and the public you're reaching online are themselves almost completely independent of one another. One of the more common complaints or statements I see is to the effect that other, uninvited people are barging into a private or limited discussion. But at the same time, that discussion is occurring in a highly public space.

There are a few interpretations of that:

- It could be a "simple matter of training", of educating people to understand that what is online is inherently public. See xkcd's 10,000 for the scale that's involved, and multiply that by roughly a factor of 80 to reach global scale. That's a lot of painful confusion endured every single day.

- People holding this position could well be disingenuous, know full well that they're monopolising a public discourse, and are seeking simply to exclude conflicting voices and viewpoints. I'm told that such things happen....

- The principles might simply be wired too deeply into our psychology. We have an inherent sense of rooms and spaces and friends and groups, and think that we're having discussions amongst them even when the reality is that we are not, and much as optical illusions and legerdemain still work and fool us even when we know the trick, there's no engineering around this.

I'm divided among all three of these viewpoints, though I put strong weight on the third. And there may well be others I'm not considering.

But the point remains that attempting to have intimate discussions at scale presents fundamental contradictions that can't easily be resolved.

(Original author / submitter.)


I like this take. I think we have to realize what these mass social media platforms are and manage our social interactions on them accordingly. I think social media should fall into two categories - the mass platforms where you get to interact with the rest of the world, and the private ones where you have only your close friends.


Anonymity plays a key role here.


Speaking as both the original author and someone who's been studiously pseudonymous online for well over a decade (after several decades of generally-public disclosure): anonymity and pseudonymity are exceedingly challenging.

I know I've left trails, and that if this were something my life absolutely depended on I'd probably not be writing this now. There are any number of ways to determine who a person is, or even to narrow down the probable set of individuals, often with only the thinnest of data. Given the prevalence of sensors, tracking, and physical-space monitoring (facial recognition, device tracking by WiFi and Bluetooth sensors, license plate readers, purchase and credit card data, and more), odds are pretty good that an online persona could be narrowed down to a few score of potential targets reasonably quickly by a motivated entity. Doing that at scale might be more challenging, but seems to be at least roughly possible by some state-level actors.

And that level of surveillance may well not be necessary, only the threat of such actions.

Between semantic analysis, time(s) of activity, and correlation with other known factors (travel or commute patterns, power or communications outages correlating with non-active periods, and the like), there's a lot of data to go on.

The biggest protections seem to me to be far less technical measures such as encryption, obfuscation, and pseudonymity, than they are strong privacy laws, civil rights, legal protections, rule of law, and civil institutions which are strong, robust, highly-trusted and trustworthy, effective, and dedicated to their mission.

That's not to say that technical protections aren't necessary; I absolutely believe that they are. However they are not sufficient, and often prove to be highly brittle: affording strong protection until at some point, whether due to a technical fault or lapse in tradecraft, they aren't. At that point the jig is up, and absent the social institutions in my previous 'graph, vulnerability is absolute.

We cannot live without trust. An absolute faith in anonymity is the false belief that we can.


This sounds something like an oft-repeated truth.

"A single death is a tragedy, a million deaths are a statistic." - indicating that people tend to have less emotional investment on larger scales.

There was more on my mind, but I forgot.


Imagine using machine learning to create shadow profiles of users. You don't need to focus so much on the content of their messages directly, but rather you score the replies that come after a user's message.

When a user's shadow profile reveals them to be a conflict amplifier, or one who responds to other amplifiers, then the usage of the site is throttled subtly for them in a way that discourages engagement. For example you could lock the reply component for 10 seconds on page load, or don't show replies that are less than 90 seconds old, and other small nuances.

All of this is to say, I bet there is a technological solution, contrary to the author's assertions.
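A minimal sketch of that throttling idea (all scores, thresholds, and field names are hypothetical; a real system would need a separate toxicity classifier):

```python
def amplifier_score(user_messages: list) -> float:
    """Score a user by the heat of the replies their messages attract,
    rather than by the content of the messages themselves."""
    replies = [r for m in user_messages for r in m["replies"]]
    if not replies:
        return 0.0
    return sum(r["toxicity"] for r in replies) / len(replies)

def friction(score: float, threshold: float = 0.6) -> dict:
    """Translate a score into subtle UI friction: seconds before the
    reply box unlocks, and the minimum age (s) of replies shown."""
    if score < threshold:
        return {"reply_lock_s": 0, "min_reply_age_s": 0}
    return {"reply_lock_s": 10, "min_reply_age_s": 90}

# A user whose posts consistently draw heated replies gets throttled.
msgs = [{"replies": [{"toxicity": 0.9}, {"toxicity": 0.8}]}]
print(friction(amplifier_score(msgs)))
# {'reply_lock_s': 10, 'min_reply_age_s': 90}
```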


Doesn't have to even be that complicated. Remember when YouTube comments used to be a dumpster fire?

I'm pretty sure YouTube is doing sentiment analysis on all comments and artificially promoting the ones that tend to be positive.


This practice leaves a bad taste in my mouth. Presents an inauthentic vision of what people are.

Anger and other negative emotions are part of the human condition. Suppressing their expression doesn't change the underlying issue that caused the emotion.

Makes YouTube feel more inauthentic and bland.


The prior algorithm was also "inauthentic". There is nothing about anger or negative emotions that are inherently more authentic than positive emotions. And because anger tends to increase engagement (be less "bland"), algorithms tend to amplify anger artificially in order to juice their own metrics. This is why Twitter was a shithole even before Musk took over.


Why do you speak as if the human condition experiences only positive or only negative emotions?

It's everything. Trying to stifle any of them leads to a bastardized experience.

And YouTube comments used to be more raw and unfiltered than the constantly-promoted rage-bait that has defined Twitter for much of the past decade.


I think Google is doing this to protect creators. There have been a lot of people speaking out about burnout and severe mental health struggles on YouTube for a while now. A big contributor to this is toxic engagement from the community. Some people get very upset when they see a whole bunch of negative comments directed at them.

You might want to say “these people should just suck it up” but that’s not a road Google wanted to head down. They see healthy and happy creators as more productive and hence more profitable. Hiding downvotes and sentiment analysis on comments are two ways they can protect creators from this stuff.


I call shenanigans on that.

I constantly see creators stress over unclear guidelines from YouTube with inconsistent enforcement on what content is or is not allowed or will or will not be monetized.

If YouTube's priority was creators they'd give clearer guidelines to creators and broaden the range of personal expression that can still get ads put on it.

No - YouTube's push for positivity at all costs is to appease investors and media critics who've complained about comments and downvote campaigns and to appease advertisers who were seriously uncomfortable with the media attention given to them during the initial media push for the adpocalypse.


It's concerning, yes. But before this, YouTube comments were a complete toxic garbage fire for _years_. It's definitely a much nicer experience than it used to be.


A very good reflection of humanity to be honest. Without this, I fear that small deviations from positivity will be met with downvoting and hate.


> Presents an inauthentic vision of what people are.

People present an inauthentic vision of themselves in real life. They just limit themselves out of social self-preservation.

They don't do that online, so we need machines to pick up the slack.


that's only if you're viewing "people" as a whole. If you instead view each commenter as an individual, it only makes sense that you would promote what improves the health of your service and punish those who stir up strife.

In general, anyone can find some reason to criticize even the best things, and if there's one lesson humanity can take from social media it's that negativity and cyber-bullying is contagious. I am completely in favor of YouTube for creating a comment section that I actually enjoy reading; one that enhances the video-watching experience. Not even a single flavenoid of bad taste in my mouth about this.


People suck. Unfortunately for the authentic types, nobody wants a product that sucks.


People suck online. It's rare to have interactions IRL that are as bad as what you see in the old Youtube comments section. So which is more authentic? How people behave in person or in unfiltered comments/fora?


People suck in real life too; it's the threat of getting a punch in the face that prevents troll behavior.


I think that's part of it, and part of it is that we curate real-life interactions a lot more than online. There are plenty of places I would never show up to / people I wouldn't talk to in real life (precisely because they would be the sort of people who troll online).

Whether that filtering is done explicitly or implicitly (i.e. neighborhood wealth)


Thanks for highlighting that point. The geopolitical filter, along with 'physical availability' (as you stated), are likely larger contributing factors than I had initially considered.


People suck online and off.

People suck far more in some environments, online and off, than others.

Which is to say: there's an inherent potential for sucky behaviour, but there are specific circumstances which really seem to amplify and trigger it.

Something like locusts: a behavioural transition of a species under the right environmental stimulus.

Brief (<4m) videos, NatGeo: <https://yewtu.be/watch?v=uURqcI08IC4>, also PBS: <https://yewtu.be/watch?v=dt6zCJ2VHok>, and Attenborough/BBC: <https://yewtu.be/watch?v=lAI6W2TOkh4>.

<https://news.ycombinator.com/item?id=22206555>

<https://news.ycombinator.com/item?id=16239835>

It's pretty clear that the people who engage in toxic behaviours online are no different than they were prior to the emergence of those environments. It's the environment itself which triggers that behaviour.

That's baked into HN's philosophy:

"As a rule, a community site that becomes popular will decline in quality. Our hypothesis is that this is not inevitable—that by making a conscious effort to resist decline, we can keep it from happening." <https://news.ycombinator.com/newswelcome.html>

One of dang's fairly frequent observations is that HN tends to operate at the edge of chaos:

- "if moderation doesn't evolve as a community grows, one ends up with the default dynamic of internet forums: decay followed by heat death." <https://news.ycombinator.com/item?id=20435202> (2019)

- "it's almost impossible to keep this place from collapsing" <https://news.ycombinator.com/item?id=35164049> (2023)

- "Trying to keep the bottom from falling out on a public forum is harder than it perhaps sounds." <https://news.ycombinator.com/item?id=9712216> (2015)

- "[T]he internet doesn't do such fine distinctions. Please just keep away from that rail." <https://news.ycombinator.com/item?id=13605136> (2017)

- "If 500-point stories on hot topics were dispositive, HN would be a 500-point-stories-on-hot-topics site. It isn't that kind of site, and intervention is required to keep it from going that way." <https://news.ycombinator.com/item?id=14306144> (2017)

- "Our job is to somehow balance the conflicting vectors. That's not so easy, and also not so easy to articulate. The idea is not to maintain a centrist position, it's to try to keep the community from wrecking itself via ideological fracture." <https://news.ycombinator.com/item?id=34025076> (2019)

- "It's hard enough to keep these threads from incinerating themselves" <https://news.ycombinator.com/item?id=30436973> (2022)

- "The important thing is to keep the site from burning in the first place." <https://news.ycombinator.com/item?id=28932445> (2021)


> pretty sure YouTube is doing sentiment analysis

Could be. I think it's mainly the phone number requirement introduced within the past few years. The bad accounts have washed out. It's also likely why Google will soon delete old inactive accounts created before this policy.


Positive or negative? Positive comments don't generate interaction: if a comment was positive, I'd like it and move on, not comment. Negative comments, on the other hand, generate a lot of interaction. If I were to guess, Google, as a marketing company, loves interaction.


Engagement is a really poor proxy for value.

I've had an awful lot of engagement that's produced little value. (As I get older, I try to avoid that, increasingly. Not always successfully.)

I've had tremendous value from some very brief engagements, often one-liners or casual remarks, though also when someone shares a deep knowledge of a subject or a truly insightful personal experience.

Those are all exceptionally valuable, but in terms of "engagement metrics" such as replies, time-on-site, etc., they're often negatives. To turn a phrase: feed a person endless questions and challenges, and you'll keep them on site for a day. Provide them the answer or tools they need, and they disappear forever.

(Dating / matchmaking sites face a related version of this problem.)

With time, people who do value useful information come to realise the timesuck nature of high-engagement, low-value sites (o hai redditz) and avoid them like the plague.

Side observation: I'm playing with FastGPT (largely because it doesn't require registration to use, so I can't compare it to ChatGPT or other registration-required generative AIs), and one of the things that's useful about it is that it gives specific answers to specific questions, rather than sending me off on an endless quest through low-grade online sources.

Or even relatively high-grade ones such as Wikipedia, which might answer the immediate question but tend to prompt more. Curiosity ain't necessarily bad, except for cats....

What generative AI does that General Web Search does not is actually quench the thirst. Which is useful from a personal value perspective, though possibly a shock to the system for both online search and content providers.


I believe their point was that we no longer see as much nonsensical hate in YouTube comments - or at least that rings true to me, and I've actually wondered about it for a while.


You min/max for engagement, but get a local max with negativity, so you increase your variability to find another peak


So that's why every comment has seemingly become a variation of "Can we all just appreciate the effort that X puts into their videos"? I guess it is better than flame wars but that's a low bar. These comments are just as uninteresting to read as the flame wars they replaced.


It is better than flame wars, so unless you have a suggestion, I don’t know what else they could do.

But I disagree with your premise. I know several YouTubers with reasonably large communities that have incredibly fun and wholesome interactions in the comments, mainly made possible by this sentiment analysis. Negativity and positivity both have feedback loops. If you display negativity, people will be more negative; if you display positivity, people will be more positive.
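For illustration only: YouTube's actual ranking system isn't public, but the kind of sentiment-based filtering being discussed can be sketched with a toy lexicon scorer. Everything here (the word lists, the threshold, the function names) is hypothetical:

```python
# Toy lexicon-based sentiment ranking (hypothetical; not YouTube's real system).
# Comments scoring below a threshold are demoted to the end rather than removed.

POSITIVE = {"great", "love", "appreciate", "wholesome", "fun", "thanks"}
NEGATIVE = {"hate", "terrible", "stupid", "worst", "trash", "awful"}

def sentiment_score(comment: str) -> int:
    """Count positive words minus negative words; crude but illustrative."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def rank_comments(comments: list[str], threshold: int = 0) -> list[str]:
    """Show comments at or above the threshold first; demote the rest."""
    promoted = [c for c in comments if sentiment_score(c) >= threshold]
    demoted = [c for c in comments if sentiment_score(c) < threshold]
    return promoted + demoted

if __name__ == "__main__":
    comments = [
        "this is the worst trash ever",
        "love the effort, great video, thanks",
    ]
    print(rank_comments(comments)[0])  # the positive comment ranks first
```

Note the feedback loop this creates: whichever sentiment the lexicon rewards is the sentiment new commenters see modeled, which is exactly the positivity/negativity dynamic described above.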


>I don’t know what else they could do

They could remove the comment section entirely. They've had some version of a comment section for almost two decades now and at no point during that time have they succeeded at cultivating a platform wide culture for interesting comments.

Flame wars were obviously bad and negative. Generic positive comments are slightly better but they still do not say anything interesting related to the content of the video.

At this point, comments could just as well exist out-of-band on other platforms that have figured out how to foster discussions better (hint: extensive manual moderation is required, which Google will never provide).


> They've had some version of a comment section for almost two decades now and at no point during that time have they succeeded at cultivating a platform wide culture for interesting comments.

I'd propose there is no need for a public comments section. I honestly don't understand this feature creep.


You can't call it feature creep if it's been there since literally day 1.

Some channels use the comment section extensively. PBS Space Time for example does a Q&A almost every episode where they answer questions posted as comments from previous videos.


Almost certainly this would negatively impact some engagement metric that is tied to someone’s sense of self worth, and this will not happen despite being clearly the best thing for the product.


Identifying who inflames or defuses discussions is only part of the puzzle, and there are systems which eschew algorithmic content selection (including the Fediverse from which my example is taken) where that doesn't really hold. Admins might benefit from such tools, however.

But the larger problem remains that people really can't and don't grasp the distinction between truly-global-scale-public and intimately private within online discussion contexts, and they've kept on not grasping this for decades now. You can go back to the 1960s with Joseph Weizenbaum and the ELIZA Effect as an example not only of how much people will disclose personal and intimate details online, but of how engaged they get with such systems and the discussions.

Another possibility for AI is, of course, to provide an interactive discussion mechanism itself, much explored in fiction and film. Any number of issues could arise there.

One issue I'd identified years ago is what I call the Pennyworth Problem. Bruce Wayne has a private personal butler. AI and centralised computing services could give us all Pennyworths (and we do already have Alexas, Siris, Cortanas, and, hey! Google!). But where Pennyworth might hypothetically betray Wayne specifically, PennyworthAI could betray all of Gotham, all of the United States, and all of Earth, whether to the legitimate operators of the service, to a government demanding control over it, or to some third party identifying an opportunity and availing themselves of it. And do so with disclosure rates and fidelity hugely greater than that of any human. See: <https://news.ycombinator.com/item?id=15513270>

Which would be another dimension of the general intimacy problem.


If there was a technological solution, Dan probably would’ve found it by now. But he’s still talking to people on an individual basis.


I suspect that many social networks identify conflict amplifiers and boost them. Every conflict is more page views, after all.

(This is the bluetick adverse selection problem)


Shadow profiles assume that people don't change. Or people never have a bad day.

Although it's a solution, it's certainly not how people work in the real world.


This is straight up psychopathic to say the least. You are suggesting selectively manipulating the behaviors of certain people to suit your value system.


> This is straight up psychopathic to say the least.

This is the essence of human social organization, since prehistory.


From the title I was expecting a study on polyamory or the like haha.


I'm not sure why, but archive.ph just never works for me. Am I the only one with that problem?



As the author and submitter of that piece, no, you're not:

<https://news.ycombinator.com/item?id=37148681>

I happen to be in a somewhat unusual interval of Archive Today lucidity (and accessibility) myself at the moment.


NB: Archive submitted as the original site is now inactive.


It's partly that there is only one way to express oneself in an online textual conversation: with text. If two people are loudly and annoyingly having an (intimate) conversation on, say, the bus, others around can use looks, physicality, etc. to communicate around them, creating a shared parallel experience of WTF.

However, with no way to do that online, the only recourse is more text: joining in the conversation.

Perhaps there are alternatives with avatars, GUIs etc.


In intimate conversation, a case where both or all parties share the same space and time, there are ample additional side-channels for subtexts to be communicated, including both conflict-diminishing and conflict-escalating ones.

From TFA, 5th paragraph, by some counts.

And yes, I frequently read without comprehension, scan quickly, or misinterpret as well. The problems I'm describing seem to be universal, or at the very least quite broadly distributed.


Was B what we know as a “reply person”? I think it's not a question of scalability but of the sensibility to develop the skills needed to “read the room”. Just because a thread is public doesn't mean it's appropriate to join. In many cases people post to share or vent, and even if it sounds like a question, they probably don't want another opinion.


So, if it is considered inappropriate to join in, why was the thread public in the first place?


I am guessing, for the same reason I can have a picnic in the local park with a few friends and discuss intimate matters. This takes place in public because that is convenient, but I still don't appreciate random passers-by interjecting their opinions. It's fine to listen in, though, if you're sitting close by. Most people intuitively understand this.

Online, this intuition sometimes disappears, and people feel that the fact that something is readable to them implies it is meant for them to interact with.


a better example is a meetup in a closed room. (i just attended one of those). you are clearly part of the group, but you get to overhear private conversations, and sometimes it is appropriate to join in, and sometimes it isn't. the difference is that when i walk around the room from group to group, i get signals if i am welcome to join or i am being ignored. these signals are missing online.


But does the comparison really hold? IMO, you'd have to have your "intimate" public discussion carved in stone or something to be comparable. Your example is more like SnapChat or any other platform that limits the lifetime of postings.


Both your comment and the one you're replying to illustrate perfectly (if perhaps unconsciously) the problem I'm attempting to convey in the essay: online interactions appear to some as an intimate discussion and to others as public discourse, and the clash of those perceptions quite frequently leads to unpleasant disagreement, frustration, conflict, and confusion.


People are supposed to read my opinions, but not have any of their own. They're supposed to give me attention, without taking any away from me. It's for their benefit, of course, but their words could never benefit me.

I am the all-smarmy fount of narcissism.


I think, even if very sarcastically worded, you're onto something.


This is a great post.

People like to take out their anger on Facebook/Tiktok but the bigger issue is that human communication doesn’t work well at scale.

Human experience is temporal and locality-sensitive (hence the distinctions between us vs. them, such as boomers/millennials, west/east), and scaling it to different temporal and local contexts inevitably leads to hostility and tribalism.


For hundreds of thousands of years we lived in groups of dozens. Only in the past few hundred years have larger cities become where most people live. Mass person-to-person communication is only a few decades old. Our ape brains are very adaptable, but the modern world is a lot to ask of them.


It was only a few years ago that I realised that every major advance in communications, going back to speech and arguably before, had a profound influence on human (or pre-human) society, culture, and interactions.

Shortly after I discovered Elizabeth Eisenstein, who'd pretty much developed that thesis, off the work of Marshall McLuhan, Harold Innis, and with parallels to Harold Lasswell and other media theorists.

More on that / links / references: <https://news.ycombinator.com/item?id=33685661> and <https://news.ycombinator.com/item?id=34906482>.


Check out the group-size theories behind the human Dunbar number.

Dunbar figured out that within apes, the one factor that correlates with each species' brain size is group size.

If you apply the formula to humans (i.e. measure brain size and calculate the predicted group size), it gives you a number around 150.
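That calculation can be sketched in a few lines. The coefficients and the human neocortex ratio (~4.1) below are those commonly reported from Dunbar's 1992 paper; treat the exact numbers as approximate:

```python
import math

def dunbar_group_size(neocortex_ratio: float) -> float:
    """Predicted mean group size from Dunbar's (1992) primate regression:
    log10(N) = 0.093 + 3.389 * log10(CR), where CR is the neocortex ratio
    (neocortex volume divided by the volume of the rest of the brain)."""
    return 10 ** (0.093 + 3.389 * math.log10(neocortex_ratio))

# Humans have a neocortex ratio of roughly 4.1:
print(round(dunbar_group_size(4.1)))  # ≈ 148, the origin of "Dunbar's number"
```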


It remains to be seen if humans are capable of primarily identifying themselves with the "humanity" tribe


> People like to take out their anger on Facebook/Tiktok but the bigger issue is that human communication doesn’t work well at scale.

How long did it take CSS to be able to center something vertically on the page?

Going to go out on a limb and say that human communication at scale is at least slightly more difficult than that.

Patience!


> How long did it take CSS to be able to center something vertically on the page?

https://www.w3.org/TR/REC-CSS1/#vertical-align

"vertical-align" has always been around since the beginning if you're okay with setting the line height to the height of the page.

The original model for style might have been text, but that didn't stop anyone from doing what they needed. The question has always been more whether you really want an element vertically centered to the page rather than to a page section.


Even better, don’t discuss personal issues on a public forum. Worst case use DMs while being aware that screenshots exist.


I'm using intimate in its sense of mutual relation rather than the content of discussion, though these aren't fully independent.

Borrowing from the Online Etymology Dictionary, "intimate" means "closely acquainted, very familiar", and comes from the Latin intimus: "inmost, innermost, deepest".

<https://www.etymonline.com/word/intimate>

That is, an intimate relationship or conversation is one in which the participants are both engaging at their deepest and most personal level.

This contrasts strongly with a common experience online and in broadcast media of parasocial relationships.

Tom Scott has an excellent discussion of this in his 2019 video "There Is No Algorithm for Truth", at 33m36s: <https://yewtu.be/watch?v=leX541Dr2rU&t=33m36s>. Well worth watching.

Based on Donald Horton and R. Richard Wohl, "Mass Communication and Para-Social Interaction" (1956): <https://archive.org/details/donald-horton-and-richard-wohl-1...>


The ads on this page make it unreadable on a mobile device.


We have a word for it. "Harmony".



