
You're alluding to, without naming it, the "Dunning-Kruger" effect: you need to be knowledgeable enough to determine the complexity of a field. Beginners will often overestimate their ability.

https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect



No no no!! Oh man, I’m really sad my comment invoked DK. I don’t think this is the same thing at all, and I have a strong dislike of this paper because it’s both misrepresented and almost universally misunderstood. (And the authors are complicit in encouraging this misunderstanding, because the paper made them famous.)

The paper did not show confidence being inversely proportional to knowledge or skill, as many people believe. It showed a positive correlation between confidence and skill.

But the paper itself is very hyperbolic and draws conclusions that are unsupported by its own data, IMO. You should read the actual paper; it’s enlightening to look at what they actually tested and compare that to how they wrote about it. They didn’t test high-skill tasks. And they didn’t test incompetent people. They tested only Cornell undergrads who were volunteering for extra credit, and they didn’t control for this completely skewed population sample in any way.

They also didn’t test people’s estimation of themselves. Not at all. They asked people to rank themselves within a group of others, without any knowledge of the others’ skills. That just means people were guessing, not that they were overestimating themselves!!

This is a great post explaining what’s most likely happening with DK: regression to the mean. https://www.talyarkoni.org/blog/2010/07/07/what-the-dunning-...
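To illustrate the regression-to-the-mean point, here’s a toy simulation (my own sketch, not from the paper or the blog post). The assumption: everyone’s self-estimate is noisy but *unbiased*, centered on their true skill. Just grouping people by a noisy measured score reproduces the classic DK plot shape, with no metacognitive deficit anywhere in the model.

```python
import random
import statistics

random.seed(0)
N = 10_000

# Model: true skill is latent; both the test score and the self-estimate
# are the true skill plus independent noise. No one systematically
# over- or underestimates themselves.
true_skill = [random.gauss(0, 1) for _ in range(N)]
test_score = [s + random.gauss(0, 1) for s in true_skill]
self_estimate = [s + random.gauss(0, 1) for s in true_skill]

def percentile_ranks(xs):
    """Convert raw values to empirical percentile ranks (0-100)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for pos, i in enumerate(order):
        ranks[i] = 100.0 * pos / (len(xs) - 1)
    return ranks

score_pct = percentile_ranks(test_score)
est_pct = percentile_ranks(self_estimate)

# Group people into quartiles by their MEASURED score, as the DK paper does.
quartiles = [[] for _ in range(4)]
for i in range(N):
    quartiles[min(3, int(score_pct[i] // 25))].append(i)

for q, idxs in enumerate(quartiles):
    actual = statistics.mean(score_pct[i] for i in idxs)
    est = statistics.mean(est_pct[i] for i in idxs)
    print(f"quartile {q + 1}: actual pct {actual:5.1f}, self-estimate pct {est:5.1f}")
```

Because both quantities are noisy, the bottom quartile’s average self-estimate is pulled up toward 50 and the top quartile’s is pulled down toward 50: the “unskilled overestimate, skilled underestimate” picture emerges from pure measurement noise.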


Ironically, the best evidence for the DK effect comes from people confidently referring to it as though it was a real thing.


If it’s not a real effect, then it can’t be used that way, because it’s not true. And if it is a real effect, it still can’t be used that way -- again, the data in the DK paper shows a positive correlation between confidence and skill, which means that if people seem sure about it, chances are higher that they’re right. People can’t be blamed for getting the wrong idea from the DK paper. The authors spun the narrative very carefully to imply that high confidence is suspect, while never actually claiming it in the paper, even though their own data shows that people who are more confident are also more skilled on average.

The idea that certainty and confidence are some kind of tell that reveals incompetence is just something people really want to believe. It feels good, but it’s specious and not actually true. Everyone knows someone who’s been arrogant and wrong, and it feels so good to think that science backs the idea that arrogance shows someone is faking their expertise. Unfortunately, the only science Dunning & Kruger did shows the opposite: higher confidence is associated with higher skill, and that’s setting aside the fact that “skill” and “confidence” weren’t measured in any of the ways people would naturally assume when they read those words.

I don’t blame @MayeulC for believing what Dunning & Kruger claimed, especially when there’s a Wikipedia article backing it up and making it seem like an accepted and solid idea. I feel bad they got downvoted because of my comment, maybe throw an upvote their way if you feel like it.


Oh, I don't care much for Internet points, but I do appreciate being shown where I'm wrong.

I might have been a bit too direct referencing DK, and I think I understand your concerns. I haven't read the original paper, and I'm willing to accept your criticism of it at face value for now.

However, what I named (and what I believe is widely understood as) the DK effect is quite similar to what you pointed out in your original post. Even if that's not what the original authors intended, I believe that giving names to such psychological phenomena/cognitive biases is valuable to contextualize a discussion: "we're going to discuss the Dunning-Kruger effect" is shorter than a full introduction. Those who don't know what DK is can look it up (or read the full introduction).

Moreover, I have myself experienced what you describe multiple times, so I am willing to give additional credit to the idea (that I was describing as DK), even if quantifying cognitive biases is probably a lost cause.

> certainty and confidence [...] reveals incompetence

That is certainly not a conclusion that I am willing to draw, nor something that I have ever seen expressed anywhere. The only thing I associate DK with is underestimating the depth and breadth of a topic when you know it only superficially. And finding examples of this is quite easy.

> The authors spun the narrative very carefully to imply or seem like high confidence is suspect

Again, I was not aware of this, nor am I under the impression that this is a conclusion people jump to (it really seems like a stretch) -- at least not in my circles. I will read up a bit more and decide whether I should keep calling this DK. Thanks for bringing it to my attention.


That’s a very kind reaction, thanks for responding. Here are some notes that might help contextualize my reaction to, and feelings about, the so-called Dunning-Kruger effect; I hope they help in your exploration of whether you believe this paper’s claims.

I don’t think it’s related to what I was talking about, because DK does not show people with less knowledge overestimating themselves, and it has nothing to do with the ratio of how much you know vs. how much you don’t know… it doesn’t hypothesize at all about unknowns or about the boundary of knowledge, so the results can’t be taken as evidence on that topic.

Subsequent papers have tried to reproduce the DK effect using skills that require more expertise than what DK tested. Some of them found no such effect, and some found a complete reversal -- the experts underestimated themselves! (Impostor syndrome might be slightly closer to what I was talking about at the top, and impostor syndrome is the opposite of DK.) I believe the studies showing the opposite of DK are mentioned in the blog post I linked to, which I can’t recommend enough if you’re curious about DK.

The DK paper tested a total of four “skills”: the ability to get a joke (I’m not kidding), some basic grammar, some basic logical reasoning questions, and last but not least the ability to know how well you did on the previous questions relative to other people. The questions they actually asked aren’t in the paper, so we don’t know what they were, but they did not test things that people normally accrue expertise in, like law, engineering, dance, architecture, or medicine.

The fourth task, estimating how you did relative to others, is the only one where they claim that people overestimate themselves. But remember, these are undergrads who were asked to rank themselves against others, not to gauge their own ability (to, e.g., get a joke) in any absolute sense. Underestimating other people is not the same thing as overestimating yourself! And yet that’s exactly what they’re claiming.

The whole study had a little over 60 Cornell undergrads volunteering for extra credit in a psych class. The part where people ranked themselves included only about half the participants, roughly 30, and it was done several weeks after the first three tasks. Ignoring the potential problems with non-native English speakers, just think about how crazy it is to declare you’ve discovered a meta-cognitive bias for all of humanity after testing only 30 kids of the same age and roughly the same background at a single Ivy League school in the US.

Kids who go to Cornell are often rich, and have often been told they’re smart their whole lives, validated by acceptance to an elite school. Why wouldn’t they overestimate themselves? A pool of kids volunteering for extra credit excludes the kids doing well enough that they decided not to volunteer, and the kids doing poorly enough that chasing extra credit was a waste of time. How do we know the kids volunteering for this study even took it seriously? They got the extra credit by showing up, not by being accurate. There are so many problems with the population sample that it seems nuts to draw any conclusions.

Here’s a link to the paper so you can read it and compare it to Tal Yarkoni’s blog post. https://www.researchgate.net/publication/12688660_Unskilled_...

The main question to ask yourself as you read it is: “Is this the only possible explanation for the effect they’re discussing?” Take special note of how they prime the reader with suggested outcomes before they describe the methodology. They have a hypothesis from the beginning and appear to test it, but they did absolutely nothing to try to disprove it. I don’t think this paper would be publishable by today’s standards, and compared to hundreds of other papers I’ve read, this one stands out for how much it jumps to conclusions and waxes philosophical without enough evidence.


Reminds me of two things that readers of this post may also find interesting:

https://terrytao.wordpress.com/career-advice/dont-prematurel...

https://terrytao.wordpress.com/career-advice/be-sceptical-of...

Tao is specifically speaking to mathematicians, but some of the lessons carry to other fields (and perhaps other areas of life).



