Why would anyone think that a large overall pool of happiness is somehow better than a high per capita happiness? This seems like the kind of thing that's incredibly obvious to everyone but the academic philosopher.
They don't, that's the point. If you start with a simple and reasonable-sounding premise ('it is ethically correct to choose the option that maximizes happiness') but it leads to obviously absurd or inhuman outcomes, then you might not want to adopt that premise after all.
Your second sentence rankles the hell out of me: you're only able to make that snap judgement because of your exposure to academic philosophy (where do you think the example that demonstrates the problem so clearly comes from?), yet you're completely unaware of that.
The bullshitters aren't puzzling over seemingly simple things, they're writing content-free fluff.
Maximizing per-capita happiness just leads to the other end of the same problem: fewer and fewer people, with the same "happiness units" spread among them. Thus we should strictly limit breeding and kill people at age X+5 (X always being my age, of course).
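To see that shrinking-population dynamic concretely, here's a toy sketch (my own construction, not anyone's actual argument): if you greedily maximize the average by removing whoever falls below it, the "optimal" population collapses to a single person.

```python
# Toy illustration: culling everyone below the current average happiness
# always raises the average, so repeating the step shrinks the population
# until only the single happiest person is left.
pop = [1, 3, 5, 7, 9]
while True:
    avg = sum(pop) / len(pop)
    survivors = [h for h in pop if h >= avg]
    if len(survivors) == len(pop):
        break  # nobody is below average anymore
    pop = survivors
print(pop)  # [9]: per-capita happiness is "maximized" by one very happy person
```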
It's actually a hard problem to design a perfect moral system; that's why people have been trying for literally thousands of years.
In general, when you run into a conclusion from some field that you find unintuitive or overcomplicated, I suggest trying to recognise that thought pattern and swallowing your pride. It's an incredibly common reaction for people educated in one area to look at another and go "wow, why are they overcomplicating this so much, they must all be blind to the obvious problems", as though literally every new student in that field doesn't ask the same questions. Heck, I do it all the time, most recently when starting to learn music theory.
You may feel so certain that they're just too wrapped up in their nonsense to see what you see. But at the very least you'll have to learn the field the way they learned it if you want to communicate with them effectively, articulate what you think is wrong, and convince anyone. And in doing so you'll likely realise that, far from being unquestioned truths, every conclusion in the field is subject to vigorous debate, and thousands of pages of criticism and rebuttal exist for any conclusion you care about. And for the field to have grown big enough that you, a person outside it, have even heard of it, there must at least be something interesting and worth examining going on.
For a prime example, see all the retired engineers who decide that because they can't read a paper on quantum physics with their calculus background, the physicists must be overcomplicating it, and who bombard them constantly with mail about their own crackpot theories. You don't want to be that person.
It's just a question of whether you value other people existing or not. If you don't, focus on per-capita happiness; if you do, focus on meeting a minimum threshold of happiness for everyone.
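For concreteness, here's a quick sketch of the aggregation rules in question (the function names and numbers are purely illustrative assumptions of mine, not anyone's actual theory):

```python
# Three ways to score the same populations; which rule you adopt decides
# which world counts as "better".

def total_happiness(pop):
    return sum(pop)

def per_capita_happiness(pop):
    return sum(pop) / len(pop)

def everyone_above(pop, threshold):
    # Minimum-threshold rule: does every person clear the bar?
    return all(h >= threshold for h in pop)

crowded = [1] * 100   # many people, each barely happy
small = [50, 45]      # two people, very happy

print(total_happiness(crowded), total_happiness(small))            # 100 vs 95
print(per_capita_happiness(crowded), per_capita_happiness(small))  # 1.0 vs 47.5
print(everyone_above(crowded, 5), everyone_above(small, 5))        # False vs True
```

Total happiness picks the crowded world, per-capita picks the small one, and the threshold rule rejects the crowded one outright.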
I don't see how you couldn't value other people existing – I think they have just as much of a right to experience the universe as I do.
Has that belief led you to a lifestyle in which you are just barely happier than miserable so that you can lift as many others as you can out of misery?
In this particular case, it's because the success of an ad-funded service depends on the number of users it has.
If you don't like the repugnant conclusion, you have to change something in the premises or the setup so that it no longer follows. Arguing against it and calling your refutation obvious doesn't accomplish anything.
I agree. The math that applies to corporate profits is not the same math that should apply to human happiness.
But we have to acknowledge that the weird philosophical thought experiment that can't possibly convince anyone except weird philosophers turned out to be convincing to other entities after all.
Compare the trolley problem, a famous thought experiment that people used to laugh at, up until a couple of years ago, when suddenly important people began asking important questions like "should we relax the safety standards for potentially life-saving vaccines?" and "how much larger than Y does X need to be before preventing X functionally illiterate children is worth the price of Y dead children?"
First, the phrasing is confusing, because it's not clear whether people with very low happiness N are what we'd ordinarily call unhappy or sad, which would actually be negative utility. I take it that with this measure, positive N means someone is more happy than unhappy.
Second, what's "obvious to everyone" is just based on how you're phrasing the question. If you suggested to people it would be better if the population were just one deliriously happy person with N=50, vs 5 happy people with N=10.1, people would say obviously it would be better to spread the wealth and increase overall happiness.