This reminds me of a prediction-game experiment I once heard described roughly as follows. \*
The researchers presented people with the following:
f(1) = true
f(2) = true
f(4) = true
f(8) = true
And asked: what is f?
And people would immediately jump in, test 16 and 32, and then proudly declare that
f = x -> x = 2^n for some integer n
Forgetting to test f(3), f(5), etc.
On further examination it turns out that
f = x -> true.
\* I wish I could remember more of the details, such as whether it was a real experiment or just an illustration of one, but it's not an easy thing to search for, and I rely too much on memory and search.
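To make the trap concrete, here's a minimal Python sketch (the function name and queries are mine, not from the study): confirming queries can never separate the fancy hypothesis from the boring one; only a query the guess predicts to be false can.

    # The hidden rule is the boring one: f always returns True.
    def f(x):
        return True

    # Confirming tests: every query fits the "powers of two" guess...
    print(all(f(x) for x in (1, 2, 4, 8, 16, 32)))  # True

    # ...but only a query the guess predicts to be False can separate
    # the two hypotheses. If f were really "x is a power of two",
    # f(3) would have to be False.
    print(f(3))  # True -> the "powers of two" hypothesis is falsified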
He uses the game to show that people do something akin to Bayesian updating over possible concepts and have certain intuitive priors (e.g., ‘even numbers’ is a priori more likely than {2, 7, 9, 31}).
This is briefly mentioned at the beginning of Kevin Murphy's Machine Learning: A Probabilistic Perspective, so you might also have encountered it there.
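For anyone curious what that updating can look like mechanically, here's a toy sketch in the spirit of the number game; the universe, hypothesis set, and prior are invented for illustration and aren't the book's actual model. The likelihood uses the size principle: among concepts consistent with the data, smaller ones score higher.

    # Toy Bayesian concept learning over a handful of candidate concepts.
    UNIVERSE = range(1, 101)
    hypotheses = {
        "powers of two": {x for x in UNIVERSE if (x & (x - 1)) == 0},
        "even numbers":  {x for x in UNIVERSE if x % 2 == 0},
        "all numbers":   set(UNIVERSE),
    }
    prior = {"powers of two": 0.3, "even numbers": 0.4, "all numbers": 0.3}

    def posterior(data):
        # Size principle: each example is drawn uniformly from the
        # concept's extension, so P(data | h) = (1/|h|)^n if h is
        # consistent with the data, and 0 otherwise.
        scores = {}
        for name, ext in hypotheses.items():
            consistent = all(x in ext for x in data)
            scores[name] = prior[name] * (1 / len(ext)) ** len(data) if consistent else 0.0
        total = sum(scores.values())
        return {name: s / total for name, s in scores.items()}

    print(posterior([1, 2, 4, 8]))
    # "powers of two" dominates: it has only 7 members up to 100, so four
    # consistent examples make it ~40,000x likelier than "all numbers",
    # and "even numbers" is ruled out entirely because 1 is odd.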
A similar thing happens with folks when debugging: they assume something, test their assumption, and happily declare victory. Instead, they should be testing _against_ their assumptions to prove their theory wrong.
> Instead, they should be testing _against_ their assumptions to prove their theory wrong.
I think this mindset should be taught explicitly starting in grade school.
If you have an idea, think how you'd disprove it, and test it. If you can't think of how you'd disprove it, that's a strike against the idea. If you can't test it, that should at least make you suspicious.
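As a concrete (entirely hypothetical) debugging example: if your theory is "this parser only fails on negative inputs", the useful test sweeps inputs the theory claims are safe and looks for a counterexample. The function and inputs below are invented to illustrate the mindset.

    # Stand-in for the code under suspicion (invented for illustration).
    def parse_amount(s: str) -> float:
        return float(s.replace(",", ""))

    # Theory: "parse_amount only fails on negative inputs."
    # A disconfirming test probes inputs the theory says are safe:
    supposedly_safe = ["0", "1,000", "3.14", " 42 ", "$42"]

    for s in supposedly_safe:
        try:
            parse_amount(s)
        except ValueError:
            print(f"theory disproved: non-negative input {s!r} fails too")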