Your point seems superficially valid, but where do we go from there?
>The worst thing about being smart is how easy it is to talk yourself into believing just about anything. After all, you make really good arguments.
>EA appeals to exactly that kind of really-smart-person who is perfectly capable of convincing themselves that they're always right about everything. And from there, you can justify all kinds of terrible things.
Should we not talk ourselves into believing stuff? Should smart people specifically avoid changing their beliefs for fear of justifying "all kinds of terrible things"?
>I'd love to believe in effective altruism. I already know that my money is more effective in the hands of a food bank than giving people food myself. I'd love to think that could scale. It would be great to have smarter, better-informed people vetting things. But I don't have any reason to trust them -- in part because I know too many of the type of people who get involved and aren't trustworthy.
So you don't trust donating money to food banks or malaria nets because you "don't have any reason to trust them", then what? Don't donate any money at all? Give up trying to maximize impact and donate to whatever you feel like?
> Should we not talk ourselves into believing stuff? Should smart people specifically avoid changing their beliefs for fear of justifying "all kinds of terrible things"?
It's simple really: just be skeptical of your own reasoning because you're aware of your own biases and fallibility. Be a good scientist and be open to being wrong.
> So you don't trust donating money to food banks or malaria nets because you "don't have any reason to trust them", then what?
No, they don't trust that you can scale the concept of "food banks are more effective than I am" to any kind of maximization. You can still donate to worthy causes and effective organizations.
> Don't donate any money at all? Give up trying to maximize impact and donate to whatever you feel like?
Yeah, basically. Giving is more helpful than not giving, so even a non-maximalist approach is better than nothing. Perfect is the enemy of good, aim for good.
>It's simple really: just be skeptical of your own reasoning because you're aware of your own biases and fallibility. Be a good scientist and be open to being wrong.
This just seems like generic advice that's theoretically applicable to everyone. Is there any evidence of effective altruists not doing that, or of this being specifically a problem for the "really-smart-person" type?
>No, they don't trust that you can scale the concept of "food banks are more effective than I am" to any kind of maximization. You can still donate to worthy causes and effective organizations.
I'm not quite understanding what you're arguing for here. Are you saying that you disagree with effective altruists' assessment that you should be funding malaria nets in Africa or whatever (i.e. what they want you to do), rather than donating to local food banks (i.e. what you want to do)?
>Yeah, basically. Giving is more helpful than not giving, so even a non-maximalist approach is better than nothing. Perfect is the enemy of good, aim for good.
To be clear, you're arguing for donating to whatever your gut tells you, rather than trying to maximize benefit?
But where do you draw the line at "overthinking"? I agree that "don't help the homeless guy down the street in favor of funding AI alignment research" is a bit unintuitive, but keep in mind that "don't help the homeless guy in favor of helping 100 random guys in Africa" is also unintuitive (at least to the extent that we needed a whole movement to popularize it). I'm not saying that AI alignment research is actually the most worthwhile cause to fund, but "convincing people to donate to unintuitive but theoretically higher-utility projects" is basically the reason why effective altruism even exists. If people already naturally donated to the highest-impact charities rather than donating to their alma mater or the local opera house, we wouldn't need EA, because that would be the default.
>> EA appeals to exactly that kind of really-smart-person who is perfectly capable of convincing themselves that they're always right about everything. And from there, you can justify all kinds of terrible things.
> Should we not talk ourselves into believing stuff? Should smart people specifically avoid changing their beliefs for fear of justifying "all kinds of terrible things"?
The GP is talking about self-deception. And yes, we should not deceive ourselves.
Okay, but how does this translate into actionable advice? Nobody sets out to intentionally deceive themselves. Telling people that "we should not deceive ourselves" is basically as helpful as "don't be wrong".
One approach is to turn up the demand for intellectual rigor until it hurts. For example, I don't want to become a maths crank, so I am learning about computer proof checkers: Lean, ACL2, Coq, those things. But they are painful to use; it hurts!
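To give a flavour of what that looks like, here is a minimal sketch in Lean 4 syntax (assuming its standard library lemma Nat.add_comm): even a fact as obvious as commutativity of addition has to be stated and justified explicitly before the checker accepts it.

    -- Lean 4: a trivially true statement still needs an explicit proof term,
    -- here supplied by citing the library lemma Nat.add_comm.
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

And that is the easy case; anything non-trivial gets painful fast.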
To broaden the applicability, consider that a clever person goes through life with their bullshit checker set to medium. Normies try to persuade them of stuff, and they quickly spot the flaws in those arguments. But a clever person doesn't lock the setting at medium; they adjust it to suit their adversary. If it is a negotiation for a large business deal, the skepticism gets turned up to a high level.
One can imagine a situation in which an executive, Mr E, at company A discovers that company B has hired away one of company A's due diligence team. Whoops! Mr E thinks, "They know what we look for, and will have made very sure that it looks good." One adjusts the setting of one's bullshit detector not just according to the raw ability of one's adversary, but also according to whether they have access to your thought processes, which they can use to outwit you.
Assume for the sake of argument that the previous paragraph is for real. Adjusting your bullshit detectors to allow for an adversary reading your secrets is an option. Then it leads to actionable advice.
How do you set your bullshit detector when you are trying to avoid deceiving yourself? You use the settings that you use when you fear that an adversary has got inside your head and knows how to craft an argument to exploit your weak spots.
How about this: swinger communities have safe words they can use to step outside the playtime aspect of reality. How about we develop something similar for internet arguments, a term that is mutually agreed upon in advance, and when uttered by a participant it kicks off a well documented (and agreed upon) protocol where all participants downgrade System 1 thinking to zero and upgrade System 2 thinking to 11... and all participants carefully monitor each other to ensure that people are executing the agreed upon plan successfully?
This general approach works quite well (with practice) in many other domains; maybe it would also work for arguments/beliefs.
Wouldn't this devolve into name calling almost immediately? In internet arguments it's already implied that you're bringing forth logical points and not just spouting off what you feel in the heat of the moment. Invoking the safe word is basically a thinly veiled attempt at calling the other party irrational and emotional.
> Wouldn't this devolve into name calling almost immediately?
If it did: 1-week ban. Do it again: 2-week ban. All according to terms of service that are well explained and that the user agreed to on a point-by-point basis... except in this case the TOS are actually serious, not "serious" in the colloquial sense that people have grown accustomed to.
This would create substantial gnashing of teeth, but I anticipate before too long a few people would be able to figure it out (perhaps by RTFM), demonstrate how to speak and think properly, and a new norm would be established.
Besides: if 95% of people simply cannot cut it, I don't see this as a major problem. Cream is valuable, milk is more trouble than it's worth.
Two things that should never be underestimated:
- the stupidity of humans
- the ability of humans to learn
> In internet arguments it's already implied that you're bringing forth logical points and not just spouting off what you feel in the heat of the moment.
It's even worse: it is perceived as such! This is the problem though: people have never been taught how to reliably distinguish between opinions, "facts", facts, and the unknown (the latter is typically what catches genuinely smart people). So: offer an educational component, maybe integrated into the onboarding process.
Too big of a hassle? Best of luck to you elsewhere (provide links to Reddit, Facebook, Hacker News, etc).
> Invoking the safe word is basically a thinly veiled attempt at calling the other party irrational and emotional.
Take a wild guess what response a comment of this epistemic quality (in the form that it is currently presented) would elicit under the standards I describe above.
Besides: I doubt any unemotional, rational people exist on the planet. It is not a question of "if" someone has these shortcomings; it is a question of "to what degree" they suffer from them. And should we expect anything different from people? We don't try to create such people, and it's not like they have anyone to emulate.
> how about we develop something similar for internet arguments, a term that is mutually agreed upon in advance, and when uttered by a participant it kicks off a well documented (and agreed upon) protocol where all participants downgrade System 1 thinking to zero and upgrade System 2 thinking to 11
This used to be Godwin's law. Except by the time that's triggered, your System 2 thinking, dialed up to 11, almost always tells everyone it's time to leave that discussion.
"Godwin's law, short for Godwin's law (or rule) of Nazi analogies,[1] is an Internet adage asserting that as an online discussion grows longer (regardless of topic or scope), the probability of a comparison to Nazis or Adolf Hitler approaches 1.[2]"
"a term that is mutually agreed upon in advance, and when uttered by a participant it kicks off a well documented (and agreed upon) protocol where all participants downgrade System 1 thinking to zero and upgrade System 2 thinking to 11"
I see very little similarity between these two things.
> Except by the time that's triggered, your System 2 thinking, dialed up to 11, almost always tells everyone it's time to leave that discussion.
From my other comment: "if 95% of people simply cannot cut it, I don't see this as a major problem. Cream is valuable, milk is more trouble than it's worth."
Lots of people will leave, but there will be some who remain. It's a similar principle to quality standards when joining various organizations, or any other process that involves targeted selection.
> I see very little similarity between these two things.
I'm not sure what isn't clear: the trigger condition you're looking for is when the first comparison to Nazis is made. Except, as I said, by the time that point is reached I'm not sure productive discussion is possible.
> a term that is mutually agreed upon in advance, and when uttered by a participant it kicks off a well documented (and agreed upon) protocol ...
That term is a trigger condition for initiating a protocol. I suggested that the first comparison to Nazis is the trigger condition. What's unclear here?
I think maybe the problem is that you seem to be classifying a highly specific concrete instance of a very broad abstract class as being equal to the abstract class itself (and thus: equal to all possible subordinate concrete classes).
>> how about we develop something similar for internet arguments, a term that is mutually agreed upon in advance, and when uttered by a participant it kicks off a well documented (and agreed upon) protocol where all participants downgrade System 1 thinking to zero and upgrade System 2 thinking to 11
> This used to be Godwin's law.
In this case, playing the Godwin's Law card would invoke my recommended process, as opposed to playing the Godwin's Law card being an instance of my recommended process.
> Except by the time that's triggered, your System 2 thinking, dialed up to 11, almost always tells everyone it's time to leave that discussion.
Again, this claim would invoke my process, and would be accompanied by a reminder that it isn't actually possible to read minds or the future; it only seems like it is possible when running on System 1 heuristics.
But then, both my logic and my intuition suggest to me that what's going on here is that you and I are talking past each other, and that if we were to eliminate all of the numerous flaws in communication (for example: your usage of "the" trigger condition instead of "a" trigger condition[1]), we'd discover we don't actually disagree very much. But ain't no one got time for that, under current protocols.
[1] A common response to this style of complaint ("pedantry") is that one should simply assume [the correct intended] meaning - but again, this has a dependency on mind reading, which is a false premise (that seems true during realtime cognition).
How about "don't go against conventional wisdom unless you have a good reason; the more conventional the wisdom, the better the reason needs to be"? Possibly combined with "be humble and give subject matter experts some credit, an hour on Google Scholar doesn't mean you've learned everything".
If the conventional wisdom is "don't order research chemicals from a lab in China then self-inject them", then maybe a plan to get a peptide lab to manufacture cheap Semaglutide is dangerous, even if you can't explain exactly why it's dangerous (in this case it's probably pretty obvious).
If, on the other hand, the conventional wisdom is "eat 6 - 11 servings of grain and 3 - 5 servings of vegetables a day", but many nutritionists are recommending less grain and there's new research out saying that much higher vegetable intake is good, maybe a plan to eat more vegetables and less bread is good.
> How about "don't go against conventional wisdom unless you have a good reason; the more conventional the wisdom, the better the reason needs to be"? Possibly combined with "be humble and give subject matter experts some credit,
I have a feeling that this is basically the generic talking point to use when your opponent is more radical than you. The opposite would be accusing your opponents of being Luddites or whatever because they're too bought into "conventional wisdom". Neither is actually helpful epistemically, because the line for "good reason" is entirely arbitrary and easily colored by your beliefs.
>an hour on Google Scholar doesn't mean you've learned everything".
>If the conventional wisdom is "don't order research chemicals from a lab in China then self-inject them", then maybe a plan to get a peptide lab to manufacture cheap Semaglutide is dangerous, even if you can't explain exactly why it's dangerous (in this case it's probably pretty obvious).
I think you're painting effective altruism with too broad a brush and giving its adherents too little credit. I'm very skeptical that the typical effective altruist is ordering semaglutide from China, or that the typical EA analysis of x-risk is based on "an hour on Google Scholar".
>If, on the other hand, the conventional wisdom is "eat 6 - 11 servings of grain and 3 - 5 servings of vegetables a day", but many nutritionists are recommending less grain and there's new research out saying that much higher vegetable intake is good, maybe a plan to eat more vegetables and less bread is good.
Hold on, all it takes to overturn "conventional wisdom" on nutrition is "many nutritionists" and "new research"? Do some well-researched books like "The Precipice" or "What We Owe the Future" suffice here? I'm sure that among all the effective altruists out there, you can find "many" to support their claims?
> I have a feeling that this is basically the generic talking point to use when your opponent is more radical than you.
EA people would probably phrase it as something about how updating strong priors in response to weak evidence needs to happen slowly, but I feel the Bayesian formulation is a bit toothless when it comes to practical applications.
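For concreteness, here is a toy sketch of what that formulation says, with completely made-up numbers (nothing more than Bayes' rule in odds form):

    # Posterior odds = prior odds * likelihood ratio (Bayes' rule in odds form).
    # Numbers are made up purely for illustration.
    def posterior(prior, likelihood_ratio):
        prior_odds = prior / (1 - prior)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    strong_prior = 0.01  # conventional wisdom: the radical claim is very unlikely
    print(posterior(strong_prior, 2.0))    # weak evidence:   ~0.02, barely moves
    print(posterior(strong_prior, 100.0))  # strong evidence: ~0.50, worth taking seriously

All of which is just a formal way of saying that a radical claim needs a lot of evidence before it's worth acting on.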
The broader point is that when your opponent is more radical than you on a factual issue[0], but they don't present any evidence for why, they're probably wrong. This isn't good enough in a debate but it's a fine heuristic for deciding whether to use opioids as performance enhancers.
> I think you're painting effective altruism with too broad a brush and giving them too little credit. I'm very skeptical that the typical effective altruist is ordering semaglutide from china
This is a fair criticism, but I didn't mean to apply it to the movement as a whole, only to the particular failure mode where some effective altruists (or more generally, rationalists) talk themselves into doing bizarre and harmful things that equivalently smart non-EAs would not. It's easy to talk about Chesterton's Fence but it's not so easy to remember it when you read about something cool on Wikipedia or HN.
> Hold on, all it takes to overturn "conventional wisdom" on nutrition is "many nutritionists" and "new research"?
I'm just looking for a heuristic that stops you doing weird rationalist stuff, not a population-wide set of dietary recommendations. It's okay if some low-risk experimentation slips through, even if it's not statistically rigorous and even if it's very slightly harmful.
The point is that there are two requirements being met: first, no strong expert consensus ("many nutritionists" was too weak a phrasing and I apologise), and second, if you ask a few random strangers (representing conventional wisdom) whether eating more vegetables and less bread is good for you they'll tell you to go for it if you want to, while if you ask about using non-prescription opioids they'll be against it.
That's the same problem as before. Outside of maybe fundamentalist religious people who think their religious text is the final word on everything, everybody agrees that "science" is the best way of finding out the truth. The trouble is that they disagree on what counts as science (i.e. which scientists/institutions/studies to trust). When the disagreement is at that level, casually invoking "science" misses the point entirely.
That might be true, but it's a non-sequitur because this thread is talking about the epistemic practices of a particular group. Whether "science" (the institution, method, or humanity in general) will eventually arrive at the truth is irrelevant.
Not really. Saying that relying on experts isn't needed is a common, self-deprecating thing scientists like to say, but it doesn't really work. Even Feynman wrote about having to deal with cranks who sent him letters whose authors thought they had disproved relativity or something. Not everybody's opinion is equal in science.