That doesn't sound like a solid indicator of an issue. Two friends could be having a back-and-forth discussion with no harassment or conflict. You'd end up with 25+ replies and 1 like.
What's the point of locating in Silicon Valley and hiring the smartest programmers in the world if you can't figure out an algorithm to make hateful posts not show up as often in someone's feed?
I doubt it's because they can't. The more likely answer is they don't want to.
It's actually a hard problem, similar to detecting porn without using humans (see: https://en.wikipedia.org/wiki/I_know_it_when_I_see_it). Blocking purely based on keywords or Bayesian filtering usually paints with too broad a brush and ends up limiting well-intentioned free speech (I once had a comment blocked for arguing AGAINST racism!); the toy sketch below shows how that happens. It's similar to the "blocking all mention of sex also blocks sex education" problem. It seems to take a fully fleshed-out intelligence to grasp the true meaning behind even something as innocuous-looking as a written sentence.
Your assumption that people more intelligent than you "should have figured this out by now" illustrates the very problem: no one has yet come up with a good automated solution for this. If YOU do, you'll be a millionaire.
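To make the over-blocking concrete, here's a toy keyword filter in Python. The blocklist and function are entirely hypothetical; no real platform works this crudely, but the failure mode is the same.

    # Naive blocklist filter: flags any post containing a target word,
    # so a comment arguing AGAINST racism is blocked along with actual abuse.
    BLOCKLIST = {"racism", "racist"}

    def naive_filter(post: str) -> bool:
        """Return True if the post would be blocked."""
        words = {w.strip(".,!?").lower() for w in post.split()}
        return not BLOCKLIST.isdisjoint(words)

    print(naive_filter("Racism is wrong and we should say so."))  # True: false positive
    print(naive_filter("Here is my favorite knitting pattern."))  # False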
Again, I disagree. Twitter came up with a way to make some posts more widely shown, and you're trying to tell me they don't have a way to make some posts less widely shown? As someone else said, if there are a lot of comments and few likes, don't put it in the trending feed. That's one solution for free, and I don't even work for Twitter. If it's two people having a conversation back and forth, the broader Twitter audience doesn't need to see it. It's not censored, it's not hidden, it's just not broadcast either.
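Something like this minimal sketch, where the function name and thresholds are made up by me and have nothing to do with Twitter's actual code:

    # Keep a post out of the trending feed when replies dwarf likes.
    def eligible_for_trending(replies: int, likes: int,
                              min_replies: int = 25,
                              ratio_threshold: float = 5.0) -> bool:
        if replies < min_replies:
            return True  # too little activity to judge either way
        return replies / max(likes, 1) < ratio_threshold

    print(eligible_for_trending(replies=30, likes=1))    # False: likely a pile-on
    print(eligible_for_trending(replies=30, likes=200))  # True: broadly liked

Yes, two friends chatting (25+ replies, 1 like) would trip this too, but the cost is low: their conversation isn't hidden or removed, it just isn't broadcast.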
People have become millionaires, billionaires even, for the exact opposite of what you say. You become rich by making sure controversial content is spread as far and wide as possible, because hatred and fear sell as entertainment. People get addicted to it. You don't become rich by filtering out hateful content, you become rich by enabling it and spreading it because that's what people want (as long as they're not the target).
If you limit yourself merely to detecting abusive tweets, perhaps it is hard. But there are plenty of ways to adjust how the social dynamics work that would decrease this kind of behavior; the argument, I believe, is that most of those would also decrease _engagement_.
The real problem is the incentives, both for Twitter and for the people interacting on Twitter. The solution is probably _social_ rather than technical, but as long as Twitter wants to keep your eyeballs on their site for as long as possible (so they can sell ads or whatever to advertisers), a whole host of solutions are going to be verboten.
By way of example, Hacker News literally has a feature that will lock you out of the site if you're using it more than you want to. That's great for us, the users. But Twitter would never do such a thing.
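(For the curious, here's a toy version of that lockout logic in Python. HN's real setting is called "noprocrast"; the class and parameters below are just my illustration, not its actual implementation.)

    import time

    # Self-imposed lockout: after max_visit_min of browsing, you're locked
    # out for min_away_min, in the spirit of HN's noprocrast setting.
    class Noprocrast:
        def __init__(self, max_visit_min: float = 20, min_away_min: float = 180):
            self.max_visit = max_visit_min * 60
            self.min_away = min_away_min * 60
            self.session_start = None
            self.locked_until = 0.0

        def allow(self) -> bool:
            now = time.time()
            if now < self.locked_until:
                return False  # still locked out
            if self.session_start is None:
                self.session_start = now  # a new session begins
            if now - self.session_start > self.max_visit:
                self.locked_until = now + self.min_away
                self.session_start = None
                return False  # session used up; come back later
            return True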
I would imagine the issue is precisely that they can't. What is hateful to you is charming and encouraging to someone else. Social norms and cultural differences are gigantic. Look at the recent controversy with the conservative guy on YouTube who referred to a reporter from Vox as their 'queer Latino reporter' and it was seen as hate speech... despite the Vox reporter openly and frequently labelling themselves as Vox's queer Latino reporter. How is a computer supposed to interpret that? How is it supposed to know that when person A says something and person B says the exact same words, referring to the exact same subject, the greater context of each speaker's background, political affiliations, and audience actually determines the 'meaning' behind the statement, not the statement itself?
This is not an easy problem, and it does no one any good to pretend that it is. Tackling the issue also requires considering other social situations. Is someone supporting equal treatment of women in Saudi Arabia practicing hate speech against the conservative ruling party? If we'd had systems that let us actively regulate speech the way we can now, would it have been appropriate to block Martin Luther King Jr. because his message was fomenting civil disobedience and causing families to bicker over race politics? Why are we so damn certain that any argument today will necessarily be resolved by regression rather than by wider acceptance of further progress? Change in human societies is always ugly, always comes at the cost of pain and strife, and on balance has usually moved us forward. I can't say the same for censorship. Censorship makes any forward movement impossible, and only serves to let regressive mindsets fester and make-believe that they have more support than they actually do.
We're not talking about banning these posts, or hiding them, or censoring them. Just not showing them as widely as other posts. It doesn't even need to go as deep as "this is hateful"; "this has the potential to be hateful" would do, or the author could simply be given the ability to control how widely the message is shared.
I see people here trying to debate solutions like good engineers, but unless they work at Twitter, it's a waste. We can guess all day and come up with a million solutions, but when it comes down to it, Twitter absolutely has the ability to control posts that spiral out of control. What they don't have is the desire to do so.
It can be smoothly related to the probability of the post being undesirable. So if the algorithm thinks a post is 50% likely to be undesirable, simply count it as "half a weight." Or tune this function to be whatever you want. Twitter et al. already make arbitrary choices about what gets shown.
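A minimal sketch of that weighting, assuming some upstream classifier already produced the probability (both names here are placeholders):

    # Scale a post's ranking weight by the model's confidence that it is
    # fine, instead of making a hard show/hide decision.
    def ranking_weight(engagement: float, p_undesirable: float) -> float:
        return engagement * (1.0 - p_undesirable)

    print(ranking_weight(1000, 0.5))   # 500.0: counts as "half a weight"
    print(ranking_weight(1000, 0.05))  # 950.0: barely touched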
For every mean-spirited hate post that gets promoted, another tweet about knitting is not promoted. Why is censorship only bad if the content is hateful?
"How is it supposed to know that when person A says something and when person B says the exact same words....."
I was about to argue against this, but then realised it's worse than you suggest.
If I as a white person used the N-word to describe a black person, I would be labelled a racist, whereas a black person can say it all day long. And if I black up and say it, it's even worse. But with gender the rules are almost reversed: I can declare myself a woman and expect that to be somewhat respected.
And on the internet, no one knows you're a dog, or a transvestite in blackface.
We're at a stage in "AI" where we can fool image detection by modifying a single pixel, where Google's AI mislabels black teenagers as gorillas and Bing overlooks child porn, and where self-driving cars still drive themselves into things.
All while "learn to code" is used to harass in some contexts...
But we expect Twitter folks to just figure out an algorithm to filter out "hateful" posts, when there isn't even an accepted definition of hateful? The first replies it would filter would be all the people telling Trump how bad and evil he and his policies are, while the people who actually try to harass others will find quick and easy ways to game the system, as they always have. That's my prediction of a 'best'-case outcome.
Additionally, there's no real need for technically public discussions to be promoted or made more public, so it's not really a failure state if the algorithm doesn't promote a high-reply-rate exchange between two users.
In general, when people say something is relatively simple and yet it hasn't happened, that's often a sign we're missing some hidden complexity.