I recall conversations on Usenet decades ago about the Monty Hall problem[1] in which people gave elementary proofs that probabilities don't change by opening a door. Even from mathematicians and statisticians. People were very insistent that the analytical solution was simple and obvious and that switching doors didn't change anything.
The only thing that changed some people's minds was a program that simulated the Monty Hall problem. This was needed to get people to reconsider their proof when the claim was highly counterintuitive.
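For anyone who wants to rerun that experiment, here's a minimal sketch of such a simulation (Python, assuming the standard rules: Monty always opens an unpicked goat door):

```python
import random

def play(switch, trials=100_000):
    """Simulate the three-door game; return the contestant's win rate."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        # Monty opens a door that is neither the pick nor the prize.
        opened = random.choice([d for d in range(3) if d not in (pick, prize)])
        remaining = next(d for d in range(3) if d not in (pick, opened))
        if switch:
            pick = remaining
        wins += pick == prize
    return wins / trials

print(play(switch=True), play(switch=False))  # ~0.667 vs ~0.333
```

Switching wins about two thirds of the time, staying about one third.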
The Monty Hall problem is a fun one because a purely analytical approach can give you one answer, while modeling the whole situation (especially the fact that the final probability is not natural, since the final decision is forced into far fewer doors than were originally present) and testing it can give you a different answer entirely.
I suppose there is an interesting parallel with discoveries made in deep theoretical mathematics. While something may seem possible because "the math checks out", it could be only theoretically possible, because it relies on some unnatural value to "be" possible in the first place.
Testing is where hopeful theories are smashed by reality until all that remains is the verifiable truth. Truly, why wouldn't we test?
Fundamentally, the trouble with the Monty Hall problem isn't that analysis comes to the wrong answer, it's that people often come to the wrong model when reasoning about it informally.
It's not any harder to do the "correct" analysis than to write up a simulation. It's mostly just easier to convince yourself that the simulation matches the problem description when it reaches the unintuitive result.
Fundamentally, I think the real trouble with the Monty Hall problem is that the assumptions of the game are not clearly stated. Because of this, people come up with different models.
That's absolutely right; further, if you explicitly model the behavior of the game show host, you can exhibit models under which "it's better to switch" and models under which "it doesn't matter if you switch or not".
Take a game show host who lets you choose a door, randomly reveals what is behind one other door, and then gives you an opportunity to change your choice. This game show host CAN (randomly) reveal the prize; he has equal probability of revealing ANY of the unchosen doors.
Say you are playing the Monty Hall game with this host. You choose your door, he opens another door, and it happens (purely by chance) that there is no prize there. Do you still believe that you have a 2/3 chance of winning if you switch to the other unopened door?
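To make the contrast concrete, here's a sketch of this hypothetical ignorant-host variant (Python): simulate a host who opens an unchosen door at random, throw away the runs where he reveals the car, and measure the conditional win rates.

```python
import random

def ignorant_host(trials=300_000):
    """Host opens one of the two unchosen doors uniformly at random;
    keep only the runs where he happened not to reveal the car."""
    kept = stay_wins = switch_wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        opened = random.choice([d for d in range(3) if d != pick])
        if opened == prize:
            continue  # car revealed by accident: discard this run
        remaining = next(d for d in range(3) if d not in (pick, opened))
        kept += 1
        stay_wins += pick == prize
        switch_wins += remaining == prize
    return stay_wins / kept, switch_wins / kept

stay, switch = ignorant_host()
print(stay, switch)  # both near 0.5: with this host, switching gains nothing
```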
Isn't that a different problem entirely?
The original is that the host reveals a door without the prize.
Aren't you modeling an entirely different problem as opposed to modeling the same problem with a different model, since the problem states the parameters and you are changing those?
> Aren't you modeling an entirely different problem...
Not really, but read on:
You correctly state that in the Monty Hall problem, the host reveals a door without the prize. That's the same situation which I described in my previous comment.
Try thinking about it this way: Say you are the contestant on that show. You have never played the game before, and you will never play it again. So you don't know how the host behaves. You pick your door, he reveals another door, there is no prize behind it. You would have to ask yourself: did he deliberately open that door because it had no prize? Or did he just happen to open a door that had no prize?
Your best estimation of your odds of winning changes completely depending on how you model the behavior of the host.
However, with any type of host, the situation whereby "contestant opens door with no prize, host reveals another door with no prize" can still occur, and regardless of whether you deem that the 'original' Monty Hall problem or not, it is the most interesting way to define the Monty Hall problem. Call it the extended Monty Hall problem if you want: the situation described above has occurred, and you have to both define a model for the behavior of the host (and game) and calculate your odds under that model.
Here's a challenge for you: Can you find a model under which the contestant has 100% chance of winning by not switching to the unopened door?
This modelling ambiguity is resolved by do calculus, which makes a clear distinction between intervention and observation: https://arxiv.org/pdf/1305.5506.pdf
That's how it went when I was solving problems in the statistics course at university. I modeled the problem perfectly, got the wrong result. Changed assumptions, got the wrong result. Checked the solution; its reasoning didn't make much sense anyway. Ran a simulation, got an approximate result close to the correct solution.
You pick a door. Monty shows you a Goat. You switch or stay.
Monty will never show you the Car before offering a switch. He always shows you a Goat. It doesn't matter which Goat he shows you - it's just "not the Car".
If your first choice is a Goat, switching will win you the Car. If your first choice is a Car, switching will win you a Goat. You have a 2/3 chance of picking a Goat, so, effectively, you want to pick a Goat so that you switch to the Car.
For sufficiently analytical folks that works, but for lay people it tends to still be confusing.
The best way I’ve heard it explained to help people get it through intuition is by changing the number of doors and goats. Say there are 100 doors, and they all have goats except one, which has a car. You pick door 1. Monty then proceeds to open doors 2 through 48, skips door 49, and then opens the remaining doors. After all that, he stops and asks you, would you like to switch?
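That 100-door intuition is easy to check numerically; a quick sketch (Python; which goat door the host leaves closed when you already hold the prize is arbitrary):

```python
import random

def switch_wins_100(trials=100_000, n=100):
    """Host opens 98 goat doors, leaving your pick and one other closed;
    return the win rate for switching to that other door."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(n)
        pick = random.randrange(n)
        # The door left closed is the prize door, unless you already hold
        # the prize, in which case it's an arbitrary goat door.
        other = prize if pick != prize else (pick + 1) % n
        wins += other == prize
    return wins / trials

rate = switch_wins_100()
print(rate)  # ~0.99
```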
I always feel like there is something fundamental missing from the examination of Monty Hall problems.
I think it has to do with the difference between "probable outcome in reality" and "probable outcome based on personally known information".
Let's say when you get down to doors #1 and #49, Monty brings in someone new, with no information, and says pick a door. For that new person, standing right next to you, doors #1 and #49 have a 50/50 chance, while for you they are a 1% vs 99% chance.
How can door #1 simultaneously have a 1% chance for you and a 50% chance for Bob? The answer is that the chance is not a single fixed property of the door itself, which is hard to wrap one's head around.
And for that matter, Monty Hall himself knows one of the doors is 100% and the other is 0%.
There is something missing: regular stats don't differentiate between doing things and observing things and these two are not at all the same. If I have a digital thermometer and I observe it to show a high temperature, then I will note an association between that and feeling warm. But if I merely set the thermometer gauge to a high value artificially, it's not going to make me feel any warmer.
I think it is more fundamental than that, and not even mathematical. I think the issue is that people conflate or blur the difference between reality and their models of reality.
Your personal, information limited calculation of the chance a car is behind door #1 has no impact on if there is a car behind door #1. Reality is binary and constant. There was always a car there, or there always wasn't.
Most people correctly intuit that of course the real probability that the car is behind door #1 can't change with revealed information. It isn't a quantum car. They just get caught up on the fact that predictive chance is an attribute of the model, not the real door.
The situation is now counterintuitive in the other direction: if Monty Hall had opened those 48 doors at random and they just happened to not contain the car, then there is no advantage to switching, though many people would insist otherwise.
> if Monty Hall had opened those 48 doors at random
The fact that Monty Hall opens the doors deterministically (not randomly) is KEY.
In the original problem, Monty ALWAYS opens a door with a goat. In using 50 doors, Monty would ALWAYS open doors containing goats, and not the car. It's not random.
Knowing it's not random, it should be very intuitive.
But the doors weren't opened at random. You know he won't open the car, because that's part of the rules of the game.
Let's demonstrate with a slightly different construction: You're no longer playing with monty, but with a demon. This demon wants you to lose, but also picked a very bad game for themselves. You pick a door, then the demon opens all-but-one of the remaining doors. Then, you can pick any door, open or closed, and you get what's in it.
If the demon opens doors at random, nearly all the time (with 100 doors) you'll see the car and be able to pick it directly. In this situation, switching between the closed doors doesn't really matter, but you'll usually know exactly which door to pick, because you can see the car.
So instead, the demon only opens doors that don't have a vehicle behind them. You only ever see goats. At this point, he's not opening doors at random. If he were, you'd see the car 98% of the time, but you never do. At this point, since he's using additional information, it is in your best interest to switch.
I've never been happy with that explanation. I don't get why the host would not just open a single door, that's what the host does in the other scenario to me.
Sure, it could be that the host only opens one door, or it could be that he opens all but one door. In every case, however, it is better to switch. The all-but-one example is hyperbolic but still follows precisely the same mathematical rules. Your interpretation is valid, but so is the all-but-one example, and they all lead to the same result, it's just more obvious when you open nearly all the doors.
The best way that I've thought about it is like this:
You pick a door, then Monty lets you switch to the two remaining doors, and if the car is behind either of them you win.
Obviously choosing the two remaining doors is better.
The trick is to realize that Monty showing you the contents of one door and letting you choose the other one is identical to Monty letting you choose both the remaining doors.
I think the great advantage of "simulation", for the programming-literate, is not that you can simulate your way to a correct answer, but that the process of creating a simulation is likely to show you the error in your reasoning.
As a young teenager, I encountered the Monty Hall problem for the first time, and I didn't believe that the "analytical" answer was correct. I decided to simulate it by programming. In the 20 minutes it took me to write a simulation, I went from complete incomprehension to a full understanding of why I got the results I got. Programming a simulation of the problem forces you to write out the algorithmic significance of "Monty reveals one of the goats".
The Monty Hall problem is a fun one to code up, and yeah, there are otherwise smart people who refuse to believe it.
I coded it up in F# https://github.com/jackfoxy/LetsMakeADeal to convince one of the founders of a start-up I worked for. He just grunted and walked away. Pretty sure he still doesn't want to hear about Bayes' Theorem.
The Monty Hall problem is especially unintuitive if you've ever watched Let's Make a Deal, since the problem set up is oh so close to, but not exactly, the set up of the Big Deal in the show. It's too easy to conflate the rules of the show with the math problem, which will lead to confusion.
I think seeing the results of a simulation also elucidates the set up of the math problem vs reading a proof.
No, I don't think that detail makes it any easier. I know that but I still really can't accept the correctness of the Monty Hall strategy (I have to basically just take it on faith and stop trying to understand it). I was trying to put my finger on why, and I think it's this.
After Monty eliminates one of the three doors, the prize is behind one of the two. If someone were to come in at this point, with no prior knowledge whatsoever, their chance of picking the correct door at random is 1/2. And that is still true even if they pick the door which our contestant is being asked whether or not to switch from! This is a real mind fuck to try to accept, that the same state of what's behind each door leads to different odds of making a correct random choice, depending on when you make the choice.
I honestly don't think I'll ever be able to "get" the Monty Hall strategy. I think I get why it works (choosing to switch means you're going from a 1/3 probability to 1/2), but it makes no sense at all. It seems like even if you choose to stay on the same door, your probability is 1/2 (the same as if Joe came in off the street and chose the same door as you). Like I said, I just have to take it on faith.
The best probability estimate you can make is constrained by the information you have available. The new person showing up has less information than the existing contestant, so it makes sense that their best estimate would be less precise. Similarly, if someone with x-ray vision walked up in the middle of the game, they could pick the car 100% of the time, because they have access to more information than either of the existing contestants.
Your last paragraph isn't correct, though: by switching you go from a 1/3 probability to a 2/3 probability. Based on the information the original contestant has, switching gets the car 2/3 of the time.
I don't see how a new contestant has less information, though? They know that one of the two doors contains the prize, which is all the previous contestant knows either.
The crucial bit of information that the new contestant doesn't have is that there was a door that was ineligible to be eliminated (the door chosen by the original contestant).
If the game had different rules, it would work like you are imagining. Specifically, if Monty randomly eliminated one of the two doors, meaning there was a chance for Monty to reveal the prize instead of a goat. If Monty has the chance to eliminate the prize before giving the contestant a chance to switch, then switching does not give you an advantage.
But it didn't. Before, we knew that one of those two doors could contain either a prize or a goat. After, we know the same exact thing. No information was gained there.
Maybe this will help understand it intuitively. You have a choice between doors 1 2 3. You pick door 1. You know the odds of the car being in door 1 is 1/3. The odds of the car being in door 2 or door 3 are 2/3.
Monty opens door 3, showing a zonk. You knew there was a 2/3 chance of the car being in door 2 or 3, but now you know there's a 2/3 chance of the car being in door 2 (since you know it is not in door 3).
All this didn't change anything you know about door 1. It has the same 1/3 chance it started with. Probability is all about what you know in the moment.
The math involves understanding the rules, that Monty will never open the door you picked and will never open the door with the car behind it. This is why one can't look above and say "well, there is a 1/2 chance of the car being behind door 1 after door 3 was opened and there wasn't a car there". This would only be true mathematically if the door Monty opened was random, but we know the door Monty picks isn't random. In fact, the pool of doors that could be opened depends on your initial pick. Monty was never going to open door 1 (the door that you picked), even if it was a zonk & Monty was never going to open the door with the car, therefore one can't make that assertion.
I have fun trying to explain this problem. Let me see if I can give an explanation that will help you.
So let's say you have just picked a door in the beginning. You know you have a 1/3 chance of being right.
If I then tell you, "I will give you two options... you can either bet you are right, or bet that you are wrong"
You would obviously choose to bet you are wrong, correct? Because you know you only have a 1/3 chance of being right with your guess, which means you have a 2/3 chance of being wrong. The smart bet is that your original guess was wrong.
This is actually what is happening in the game if you think about it. You pick a door and it has 1/3 chance of being the right one; since we know Monty is only going to ever reveal a goat and never the prize, we don't even NEED Monty to reveal the door at this point - we know he is going to reveal a goat, no matter what. We don't even have to wait to see which door he reveals, since that isn't going to give us more information (it is going to be a goat, no matter what). So when he asks you if you want to switch doors, he isn't asking you to switch to ONE of the other two doors, he is asking if you want to switch to having BOTH other doors as your choice. Whether he reveals the goat before or after you choose to switch doesn't matter, because you know it will always be a goat.
If that is still not clear, let's just write out all the options:
There are three doors, A B C. One has a prize, the other two have goats. Let's see what happens with your two options (switch or don't switch).
In our first example, you pick door A and you are going to switch.
1/3 of the time the prize is behind door A. If the prize is behind door A, and you switch, you lose. This is 1/3 of the time, and you lose for switching.
1/3 of the time the prize is behind door B. You picked door A, so Monty reveals door C. You switch to the remaining door (B) and you win.
1/3 of the time the prize is behind door C. You picked door A, so Monty reveals door B. You switch to the remaining door (C) and you win.
Add up all those choices, and 2 out of the 3 times you win.
Now let's imagine that we DON'T switch.
1/3 of the time the prize is behind door A. Monty reveals one of the other doors, but you don't switch. You win.
1/3 of the time the prize is behind door B. Monty reveals door C, but you don't switch from A. You lose.
1/3 of the time the prize is behind door C. Monty reveals door B, but you don't switch. You lose.
So in this not switching world, you win 1/3 of the time.
In summary, switching wins 2/3rds, not switching wins 1/3.
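The tally above can also be done mechanically; a sketch that enumerates the nine equally likely (prize, pick) cases:

```python
doors = ["A", "B", "C"]
stay_wins = switch_wins = 0
for prize in doors:
    for pick in doors:  # nine equally likely (prize, pick) cases
        # Monty opens a door that is neither the pick nor the prize.
        opened = next(d for d in doors if d not in (pick, prize))
        remaining = next(d for d in doors if d not in (pick, opened))
        stay_wins += pick == prize
        switch_wins += remaining == prize

print(switch_wins, stay_wins)  # 6 3: switching wins 6 of the 9 cases
```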
That does help - if nothing else, adding up the outcomes manually helps to demonstrate it in a way that's not really easy to grasp on first glance. It still is kind of a spooky result which makes no sense, but at least I feel a little better about the correctness.
The Monty Hall problem is small enough to enumerate all the outcomes on paper. If you then test all the different scenarios (not many) and compare switching to not switching, switching produces a clear edge (two to one). So it can be done without a computer and not a great deal of effort. I am not a mathematician, so it was the only way I could prove it to myself at the time.
You can demonstrate the Monty Hall problem solution analytically with Bayesian statistics using prior probabilities, no need to go all the way to Monte Carlo methods.
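For instance, a sketch of that Bayesian update in exact arithmetic (assuming you pick door 1, Monty opens door 3, and he chooses uniformly when both unchosen doors hide goats):

```python
from fractions import Fraction

# Prior: the car is behind door 1, 2, or 3; you picked door 1.
prior = {d: Fraction(1, 3) for d in (1, 2, 3)}

# Likelihood that Monty opens door 3, given where the car is:
#   car at 1 -> he picks door 2 or 3 at random -> 1/2
#   car at 2 -> he must open door 3            -> 1
#   car at 3 -> he never reveals the car       -> 0
likelihood = {1: Fraction(1, 2), 2: Fraction(1), 3: Fraction(0)}

evidence = sum(prior[d] * likelihood[d] for d in prior)
posterior = {d: prior[d] * likelihood[d] / evidence for d in prior}
print(posterior[2])  # 2/3 -- switching to door 2 doubles your chances
```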
I'm curious. When those people see the simulation, do they then go back to the analysis and uncover their mistaken reasoning? Or do they just continue to reject the analysis but begrudgingly accept the outcome of the simulation? The analysis of the Monty Hall problem is so very simple I find it very odd to staunchly reject it but then be persuaded by the simulation.
Years ago, when the Monty Hall problem was not well known, I've seen with my own eyes that it was hard to convince some very smart people of the correct answer. Indeed, after being convinced through simulation or exhaustive enumeration of the decision tree or some other way, they would go back and see what went wrong with their initial analysis.
This was how I built an intuition into the Monty Hall problem as well! Wrote a little app that simulated it a decade or so ago when I was discussing with friends!
Something that makes this a lot more intuitive is to increase the number of doors. If there's 100 doors, the host will open all remaining doors except 1, and they will never open the door with the car behind it, then one has a 1% chance of winning the car if they don't switch doors and it would happen only because they initially chose the door with the car.
While I agree we should leave the correct answer to simulation, analytical approximations are often surprisingly close and have the benefit of being intuition-building.
You learn a lot more from generating an analytical solution than a simulation, so it's usually worth at least taking a stab at it analytically before jumping to monte carlo methods.
Ironically, this reminds me of a story (folk tale?) about Von Neumann himself.
A colleague told him about the Two Trains Problem (https://mathworld.wolfram.com/TwoTrainsPuzzle.html), and Von Neumann replied with the correct answer. When his colleague said, "Ah! You figured out the trick!", Von Neumann replied, "What trick? I just summed up the distances in my head!"
Math is often much more fun and compelling for some people when you both theoretically prove something works and then also convince yourself of the same thing via a numerical experiment. I’m pretty good at math proofs (pure math PhD, wrote some books and papers), but I still love to do numerical experiments. It’s fun, and you also set yourself up to be able to easily ask different questions that may be very hard to answer theoretically.
Some of you might have just suffered from poor math education. I don't believe anyone capable of learning to program competently lacks the cognitive horsepower to do math competently with more or less equivalent ease. Many do however lack the training.
Part of the problem is that the basic statistical model simply neglects to differentiate between observing and doing, which changes the odds. This is very important when trying to reason about causality. When you observe an association like your thermometer shows a high number when it's warm out it's one thing, but when you set your thermometer to a high number you won't get any warmer. Whereas if you warm the room, your thermometer will rise. This symmetry breaking is captured by something called do calculus.
> Probability of two heads: pp
> Probability of two tails: (1-p)(1-p)
> Probability of head followed by tails: p(1-p)
> Probability of tails followed by heads: (1-p)p
>
> It's not difficult to notice that if you remove the first two, the last two form a 50/50 distribution
Very nice way to illustrate why throwing out the duplicate sequences gets back to a 50/50 distribution.
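That rejection rule is easy to check empirically; a sketch assuming a coin with an 80% heads bias:

```python
import random

def biased_flip(p=0.8):
    """A coin that lands heads with probability p (assumed bias)."""
    return "H" if random.random() < p else "T"

def von_neumann(p=0.8):
    """Flip in pairs; discard HH and TT, keep HT as heads, TH as tails."""
    while True:
        a, b = biased_flip(p), biased_flip(p)
        if a != b:
            return a  # HT and TH each occur with probability p(1-p)

results = [von_neumann() for _ in range(100_000)]
heads = results.count("H") / len(results)
print(heads)  # ~0.5 despite the 80/20 coin
```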
Because that's how science is done, theory then practical?
Maybe it's just a meat space thing, but even if the math gives us the answer it still only feels "final" or "true" to me when we've actually tested something out.
That's amazing, but I guess it won't help when the person can choose the bias?
Because according to the study the person can choose the bias by choosing which side start up.
So if the person wants tails based on what you've said, they should always
1. Do the first throw starting tails up.
2. If the first one is tails, then they now want to start second one heads up.
3. If the first one is heads, they will want to get heads again so that the pair is discarded and they get a retry. So they will start it heads up.
So assuming for example that they have an ability to control bias 75% vs 25%.
Then there would be a 75% chance of the first flip coming up tails, and after that a 75% chance of getting heads.
So they will have 56.25% chance of getting it right the first 2 rounds.
The worst case for them would be getting heads first (25% chance) and then being unable to get heads again (another 25% chance), so 6.25% odds of losing in the first round.
So 56.25% chance of winning the first round of 2, or 6.25% losing and 37.5% of having to try again.
And I think the odds would converge at somewhere around 90% to 10%. I didn't do full calculations here, but overall it seems this strategy would increase the bias even more.
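Sketching out that arithmetic (under the assumed 75% same-side control and the strategy above):

```python
# One "round" is a pair of flips; only unequal pairs decide the result
# (the von Neumann rule). Assume the flipper controls the same-side
# bias b = 0.75 and starts each flip with the side they want up.
b = 0.75
win_round = b * b                    # tails, then heads: 0.5625
lose_round = (1 - b) * (1 - b)       # heads, then a failed heads: 0.0625
retry = 1 - win_round - lose_round   # equal pair, round discarded: 0.375

# Summing the geometric series over retries gives the converged odds:
p_win = win_round / (win_round + lose_round)
print(p_win)  # 0.9 -- about 90/10, matching the estimate above
```

So the von Neumann procedure amplifies rather than removes this kind of controlled bias.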
> That's amazing, but I guess it won't help when the person can choose the bias?
Alice writes on a piece of paper whether to use the result from the first or the second coin, Bob flips the coins however he likes, then once there are two different sides of the coins up, Alice turns over the paper and reveals to Bob which coin contains the result.
Though I guess that unnecessarily complicates the procedure – maybe Alice can just write "heads" or "tails" on a note and then Bob flips without having seen the note. It essentially replaces the second coin with Alice's mind which hopefully doesn't suffer from the same known bias.
If you're going to go that way you can skip the coin flip entirely. Just get both of them to write heads or tails on a note and then compare. This technique is used in some crypto projects, except instead of writing on a note you share cryptographic commitments.
But they need to remove the possibility of a psychological guessing game. E.g. Bob could've researched beforehand that people are 55% likely to pick heads if they can pick by themselves.
That doesn’t remove the possibility of a psychological guessing game, just makes it more convoluted. If Bob knows Alice will pick first, he can still bias the results.
At this point you can just play odds and evens: one person picks odd, the other picks even, they both hold up either one or two fingers behind their back, reveal them at the same time, then sum the result. This prevents the randomness from being in any one actor's control. If you're worried that your brain's RNG can be gamed, then put an odd-denominated coin in one hand and an even-denominated coin in another, and mix them up so that even you don't know which hand has which.
I can feel the difference between denominations of my local coins no problem. What you need are a pair of coins with an odd year imprint and an even year imprint.
We still need a study then to confirm that when people try to mix the coins in their hands like that, it would be random enough. And that would take another year...
Suppose Alice needs to take the coin first to herself, to use the aforementioned strategy without intentionally introducing bias, and then using result of that, which would determine whether the first or the second result from Bob would be used. Because otherwise Bob may be able to make psychological "guesses".
There is skill to coin flipping. You'd need to blind the flipper, either physically blindfold or make it so they don't know which result is the positive outcome ahead of time.
I was imagining spinning the coin with a flick of the finger. That doesn't seem to be gameable to me, but I suppose you'd need to do a lot of flicks to see if flicking the head side or tails side matters. I'd think there's no way a coin can be more likely to spin an odd or even number of times before falling, but weirder things have happened.
Yeah, thought so as well. Interesting how easily those numbers worked out, but then again it's because 75/25 = 3 and 3x3 = 9, so the final ratio between the probabilities must be 9x, i.e. 100 / (9 + 1).
I was still lucky with the numbers, as for example with 80% vs 20% it would've been 4x4 = 16, and so 1 to 16 comes to 100 / 17.
That's really cool. The key insight is that "The reason this process produces a fair result is that the probability of getting heads and then tails must be the same as the probability of getting tails and then heads, as the coin is not changing its bias between flips and the two flips are independent." So you have to make sure you always start with the same side up.
I love the math/stats history around gambling-related things, thanks for mentioning this. This method assumes the flipper can't introduce bias. ET Jaynes in his book Probability Theory also mentions that it is easy to learn to flip a fair coin in such a way that the result can be predetermined. I searched a tiny bit for this but couldn't find what he was referring to though.
I’ve heard that, with many hours of practice, dedicated amateurs and many famous magicians are able to do this kind of thing. I wouldn’t call it sleight of hand, but it is similar, although it may fall under that category broadly. I’m not a domain expert but I was taught some simple coin tricks as a child by my artist mom’s artist friend who ran the local frame shop. I never tried or thought to try to favor the coin flip or introduce bias, but it’s definitely a skill that can be acquired.
I remember at one point in my childhood learning a trick where it looks like you're flipping the coin, but you're really only causing it to rotate and wobble, meaning it's guaranteed to land on whatever side faced up as you tossed it. I don't remember how I did it though, and a few minutes of trying to recreate the effect has failed.
Lots of practice, use your thumb to hit the edge of the coin to give you more control, aim for a specific spot so you use the same amount of force and control the amount of times it flips. You can also use a surface that absorbs more so the coin is less likely to flip after hitting it.
The VN debiaser is very simple but it's not very efficient-- it loses a lot of your randomness.
Under the same IID assumption you can take N flips that returned M heads and map them to the N choose M possible ways that could have happened. The result will (under IID assumption, even in the presence of bias) be a uniform number on the range [0..N choose M). The ctz(N choose M) trailing bits can be used directly (as they will be uniform) but the rest would have to be converted to binary via something like an arithmetic coder or rejection sampling.
The result is much more efficient.
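A sketch of the core mapping (the bit-extraction step is omitted): rank each pattern of flips among the C(N, M) patterns with the same number of heads. Under the IID assumption, that rank is uniform whatever the bias.

```python
from math import comb

def pattern_rank(flips):
    """Lexicographic rank of a 0/1 pattern among all patterns of the
    same length and the same number of ones (combinatorial number
    system). Under IID flips, conditioned on the number of heads M,
    this rank is uniform on [0, C(N, M)) regardless of the bias."""
    rank, m = 0, sum(flips)
    for i, f in enumerate(flips):
        if f:
            # count the smaller patterns that have a 0 at position i
            rank += comb(len(flips) - i - 1, m)
            m -= 1
    return rank

print(pattern_rank([0, 1]), pattern_rank([1, 0]))  # 0 1
```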
Less directly, VN debiasers can also be stacked. Each debiaser outputs three streams: the normal one, one that says if the normal one output anything, and one that says if it got HH or TT. Then run VN debiasers on those. Though it takes a fairly large tree to extract most of the entropy.
Also interestingly, this extends beyond a two-sided coin, to any number of possible results, like a die with N sides.
To get a fair result from a biased dN: Roll it N times. If you don't get all N distinct results, restart. If you do, then the first of those is your final result.
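A quick empirical check of that rule, with an assumed loaded four-sided die:

```python
import random

def loaded_d4():
    """A loaded four-sided die (the weights here are arbitrary)."""
    return random.choices([1, 2, 3, 4], weights=[2, 1, 1, 1])[0]

def fair_d4():
    """Roll 4 times; restart unless all 4 faces appear. Every such
    permutation has probability p1*p2*p3*p4, so the first roll is fair."""
    while True:
        rolls = [loaded_d4() for _ in range(4)]
        if len(set(rolls)) == 4:
            return rolls[0]

trials = 20_000
counts = {f: 0 for f in (1, 2, 3, 4)}
for _ in range(trials):
    counts[fair_d4()] += 1
print(counts)  # every face comes out near trials / 4
```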
Also-related are all the problems like "simulate an 11-sided die with a 6-sided die", where the solution involves identifying certain rolls that should trigger a retry. [0]
In both cases it has to do with shaping the combination of multiple rolls using some knowledge of outcome-symmetries and overlaps.
____
[0] In this particular case, that's something like: Roll the die twice to get rolls A and B; map those to a number X in the range [1,36] using X=(((A-1)*6)+B); if X<=33 then return (X%11)+1, which will be equally likely in the range [1,11]; if X>33 then start over.
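That footnote's recipe as runnable code (a sketch):

```python
import random

def d6():
    return random.randint(1, 6)

def d11():
    """Two d6 rolls make X uniform on [1, 36]; keep X <= 33 (3 * 11
    outcomes, so the fold down to [1, 11] is even) and reroll otherwise."""
    while True:
        a, b = d6(), d6()
        x = (a - 1) * 6 + b
        if x <= 33:
            return (x % 11) + 1

counts = {f: 0 for f in range(1, 12)}
for _ in range(110_000):
    counts[d11()] += 1
# each of the 11 faces lands near 10,000
```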
This is very interesting, but it assumes a coin is biased the same way every flip.
If a coin is more likely to land on the side it starts on, then the bias can change between flips. To fix this, we just need to make sure the coin starts the same side up before every flip.
The same principle is used in embedded systems as the base of a PRNG when obtaining randomness from a possibly biased or imperfect source of noise (e.g. ADC low bits or uninitialized memory values).
I find it easier to understand if you say “instead of using the level/value as the source of randomness, use the transition from one level/state to the other as the bit of entropy” (edge-based instead of level-based). I.e. instead of heads is 0 and tails is 1, heads to tails is 0 and tails to heads is 1, and the other transitions are disregarded.
I'm confused, how does this help? If coins are biased to land same-side up, then don't I always have an advantage by guessing whatever side is up before the first throw?
I see what you're getting at, and it's subtle! To simplify the discussion, let's assume we always start out with heads up.
You're right that the first coin is more likely to end up heads. But so is the second coin, and if both occur, that would invalidate the pair of tosses. Now, imagine you guessed tails despite the coin starting on heads. If the first toss lands tails, the second coin is still more likely to land heads, which keeps the pair valid.
In other words, whatever you gain by guessing the side that's up on the first coin, you lose on account of the second coin having that same higher probability of invalidating the pair.
----
Using extreme numbers, in case that makes it more clear: imagine a coin that has a 99 % probability of ending up with the same side we start with, and – for simplicity of exposition – we always start with heads facing up before the toss.
If you guess heads, and the first coin lands heads, then there is a 1 % chance that you win, namely that when the second coin lands tails.
If you guess tails, and the first coin lands tails, then there is a 99 % chance that you win, namely that when the second coin lands heads.
The two outcomes of the first coin (99 % and 1 % respectively) perfectly balance out the two valid outcomes of the second coin.
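A quick simulation of this extreme case (assuming independent tosses that always start heads-up, per the discussion above) confirms the balance:

```python
import random

random.seed(0)
P_HEADS = 0.99  # coin always starts heads-up and lands heads 99% of the time

ht = th = 0
for _ in range(1_000_000):
    first = random.random() < P_HEADS   # True means heads
    second = random.random() < P_HEADS
    if first and not second:
        ht += 1
    elif second and not first:
        th += 1

# Both valid pairs occur with probability 0.99 * 0.01 = 0.0099, so after
# rejecting the same-side pairs neither guess retains any edge.
```

Despite the enormous bias, the HT and TH counts come out nearly identical.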
Basically rejection sampling, which is about obtaining variates from one distribution using variates from another. We have samples from p-coins (where p is the probability of flipping heads) and we wish to generate samples from 0.5-coins.
That is only guaranteed to work if subsequent tosses are independent of each other. TFA suggests that they are not.
[edit]
If you always start with the same side of the coin face-up, then the tosses will be independent of each other, but if you e.g. always flip it once or always keep it the same before the next toss, then they are not.
Hah, realized that after I posted and edited while you were commenting. It's a good point. My original assumption was that you'd just pick it up and flip it, not attempt to put the same side face up each time.
Couldn't you still game that by choosing the starting side? If you want heads, and you start on heads and get a tails for the first round, you could start on tails on the second round to try to get another tails and reset to step 1.
I take it a double headed coin doesn't count as biased for this algorithm. (because it's too biased to still be considered a "coin", for probability's sake.)
Ah, but this assumes a fixed unfairness; with this result, you could pretend to do the von neumann method but change starting sides on each flip, giving a biased result.
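A quick sketch of that cheat, using the paper's 50.8% same-side figure (treating it as a fixed probability is my simplification): by starting the first toss of each pair heads-up and the second tails-up, a "von Neumann" session drifts toward HT.

```python
import random

random.seed(3)
S = 0.508  # same-side probability; a fixed value for every toss is assumed

def toss(starts_heads):
    """One toss that lands on its starting side with probability S (True = heads)."""
    return starts_heads if random.random() < S else not starts_heads

# Pretend to run von Neumann, but start the first toss of each pair
# heads-up and the second tails-up, favouring the HT outcome.
ht = th = 0
for _ in range(1_000_000):
    first, second = toss(True), toss(False)
    if first and not second:
        ht += 1
    elif second and not first:
        th += 1

share_ht = ht / (ht + th)  # about S**2 / (S**2 + (1-S)**2), i.e. ~0.516
```

Alternating the starting sides roughly doubles the per-bit edge compared to the raw 0.508 same-side bias.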
If you always start with heads the method works out. The key is that the first and the second toss need to be independent so that HT and TH have the same probability. If you influence the second toss based on the first one it no longer works.
> 2. If you get the same result both times, goto 1
Well, yes. The wiki description basically states that you get to throw away results of coin tosses in some particular cases.
In that sense, it's not really any different from just making up the results of the coin tosses entirely. There's 10000 different ways to make your data garbage.
With all due respect to Von Neumann, intuitively I would change it to use the information in the two coins: one for (X, Y) and another for (Y, X). Not the first.
Yes, and as the second coin carries no information (because we are focusing now on sets of two different consecutive outcomes) both your and JvN's protocols are equivalent.
About a year ago, we embarked on a quest to answer one of the most intriguing questions:
If you flip a fair coin and catch it in hand, what's the probability it lands on the same side it started?
Today, we are finally ready to share the results. Thanks to my friends, collaborators, and even strangers from the internet, we collected flippin 350,757 coin flips. We ran several "Coin Tossing Marathons" (e.g., https://youtu.be/3xNg51mv-fk?si=o2E3hKa-ReXodOmc) and spent countless hours flipping coins.
In short, we found overwhelming evidence for a "same-side" bias predicted by Diaconis, Holmes, and Montgomery 2007: If you start heads-up, the coin is more likely to land heads-up and vice versa. How large is the bias? In our sample, the mean estimate is 50.8%, CI [50.6%, 50.9%].
We also found considerable variance in the same-side bias between our 48 tossers. The bias varied with a standard deviation of 1.6%, CI [1.2%, 2.0%], in our sample. The variation could be explained by a different degree of "wobbliness" between our tossers.
If you bet a dollar on the outcome of a coin toss 1000 times, knowing the starting position of the coin toss would earn you 19$ on average. This is more than the casino advantage for 6-deck blackjack against an optimal player (5$) but less than that for single-zero roulette (27$).
>If you bet a dollar on the outcome of a coin toss 1000 times, knowing the starting position of the coin toss would earn you 19$ on average. This is more than the casino advantage for 6-deck blackjack against an optimal player (5$) but less than that for single-zero roulette (27$).
This sounds like the plot of a western where a man travels from town to town and gleans a little cash from the local waterhole every time. I did the math though: in order to get just the $19, assuming you played a modest 20 times a day, it'd take 10 weeks (not including weekends), and by that point people would definitely figure out your trick. In order to make any profit quickly, you'd have to distribute the strategy, after which your secret would explicitly be out there. Even assuming perfectly honest colleagues, having that many people using the same strategy in parallel, in the open, means that before you turn any real profit, people will find out. It's a fun idea to fantasize about though.
Anyway, cheers on the paper! Pretty cool result that you guys put the effort into implementing.
The upshot is that as long as you only stake $1 at a time, you're unlikely to lose more than $50.
On the other hand, /if/ you do, you'll have to play for 6000 more flips until you can be fairly certain that you're even again.
What's worse is if, after having lost $50, you're down to your last $50, there's almost a 1/5 chance you'll blow all of it trying to recover if you wager $1 each time.
If you grow wise and start Kelly betting you'll get back to your starting $100 on average in 5000 flips, though. If you can take out a loan of $500 first, you can Kelly bet your way to even much faster, in an expected 700 flips. Whether this is worth the interest on the loan depends on how quickly you can find challengers to bet with.
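The figures in this comment can be sanity-checked with a short calculation (assuming flat $1 even-money bets at the paper's 50.8% edge):

```python
import math

p, q = 0.508, 0.492  # win/lose probability with the same-side edge
r = q / p            # gambler's-ruin ratio for even-money bets

# Chance of hitting $0 before getting back to $100 when betting $1 flat from $50:
ruin = (r**50 - r**100) / (1 - r**100)  # about 0.17

# Kelly betting: optimal fraction of bankroll and expected log-growth per flip
f = p - q  # Kelly fraction for an even-money bet: 1.6% of bankroll
g = p * math.log(1 + f) + q * math.log(1 - f)

flips_to_double = math.log(2) / g             # $50 back to $100: ~5400 flips
flips_loan_to_even = math.log(600 / 550) / g  # $550 to $600, repaying the $500 loan: ~700 flips
```

The log-growth rate g is tiny (about 0.000128 per flip), which is why even Kelly betting needs thousands of flips to double a bankroll at this edge.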
Make a deal with all the banks or systems that can print money that would allow you to take an infinite loan from them. Then just double the bet every time you lose.
If they can print money, why not lend infinitely? You will always pay it back anyway, so you don't have to worry about introducing inflation. There will always be a point when you can just burn the money that you temporarily introduced.
...from where does the money come that you pay back? Even if you have an unlimited stake, your winnings will be constrained by the counterparty eventually.
Right, I forgot, you also need someone willing to take those bets. So I think what you should do is make bets against multiple casinos/institutions, where you can develop an algorithm that will find you an optimal method of betting for reasonable 50/50 results, if that makes it easier to think about. So for example at some point you might want to go to a casino and put the max bet on a single number in roulette, but do it enough times that you would have 50% odds of winning.
Once that is exhausted, you would have to become more creative, like trying powerball enough times, but I'm not sure how good the odds are there vs the reward. Maybe that wouldn't ever work.
Actually, I forgot. You can just play with highly leveraged options. It's not infinite yet, but come back to me once you've multiplied enough times that even options are not enough.
Forget everything I said before, just play with options and automated algorithm to buy more. And post here once you can't buy any higher cost options, and we'll figure something out together.
> in order to get just the $19, assuming you played a modest 20 times a day, it'd take 10 weeks (not including weekends)
What if one play session consisted of 10 coin tosses (each an independent $1 bet)? I guess 20 games like this per day would still be doable. Would that mean $19 per week?
These are fairly gentle coin tosses; barely going a foot into the air!
When I think of a coin toss, I think high and spinning fast (like the ones before sports games, where the coin goes into the air and lands on the ground, usually rolls a short way, and is collected on whatever side it landed). I would guess the 50.8% same-side bias would be much closer to 50% if the coins were tossed this way in the experiment.
There was indeed a lot of variation in the height of the tosses. I however disagree with the conclusion: the two of my friends in the video with the most different toss heights (one tossed three times as high as the other) both had exactly the same bias (0.505).
The amount of spin is unfortunately very misleading in the 30fps videos: the coins often seem not to be spinning at all, but that's just a result of the poor video quality.
How do you control for bias coming from the same coin flipper? Do they usually flip their coin from the same starting height (whatever comfortable arm positioning they have, which I assume would also introduce bias by how they catch it as well) and to the same arc peak height? Or were they encouraged to try a different body position, strength and angle of launch for each flip?
With a 1 foot to 2 feet toss, landing in the hand, I can get the same side as the starting side more than 90% of the time, without even trying. I wouldn't trust such a toss to be fair. Landing on the floor would change the game.
"In each sequence, people randomly (or according to an algorithm) selected a starting position (heads-up or tails-up) of the first coin flip, flipped the coin, caught it in their hand, recorded the landing position of the coin"
Presumably if you instead allow the coin to land and bounce on a hard surface, the bias would disappear?
> The standard model of coin flipping was extended by Persi Diaconis [12] who proposed that when people flip a ordinary coin, they introduce a small degree of ‘precession’ or wobble—a change in the direction of the axis of rotation throughout the coin’s trajectory. According to the Diaconis model, precession causes the coin to spend more time in the air with the initial side facing up. Consequently, the coin has a higher chance of landing on the same side as it started (i.e., ‘same-side bias’).
[12] Diaconis P, Holmes S, Montgomery R. Dynamical bias in the coin toss. SIAM Review 2007; 49(2): 211–235.
After reading this the first thought I had was how do you stop people flipping the same way? Like, give me a baton and I could throw it at varying heights and control which side I caught it on. In theory the same applies to coin flipping. You can get quite consistent with your positioning and power.
You could probably control for it by making people alternate which side was face up before each flip.
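A rough check of that idea (assuming a fixed same-side probability, which is my own simplification): with alternating starts, even a strongly same-side-biased flipper produces a roughly fair outcome.

```python
import random

random.seed(42)

def heads_fraction(n, same_side_prob, alternate):
    """Fraction of heads over n flips by a flipper with a same-side bias."""
    heads = 0
    start_heads = True
    for _ in range(n):
        same = random.random() < same_side_prob
        result_heads = start_heads if same else not start_heads
        heads += result_heads
        if alternate:
            start_heads = not start_heads
    return heads / n

always_heads_up = heads_fraction(100_000, 0.9, alternate=False)  # ~0.9
alternating = heads_fraction(100_000, 0.9, alternate=True)       # ~0.5
```

Note this only makes the heads/tails outcome fair overall; anyone who sees the starting side of an individual toss still has an edge.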
> Two-up is a traditional Australian gambling game, involving a designated "spinner" throwing two coins, usually Australian pennies, into the air. Players bet on whether the coins will both fall with heads (obverse) up, both with tails (reverse) up, or with a head and one a tail (known as "Ewan"). The game is traditionally played in pubs and clubs throughout Australia on Anzac Day, in part to mark a shared experience with diggers (soldiers).
Two-up sounds pretty fun. Your comment in turn made me think of Chō-han, which is somewhat similar but involves rolling dice instead of flipping coins.
https://en.m.wikipedia.org/wiki/Ch%C5%8D-han
It's wild and raucous and usually takes place outside pubs, local workers' clubs, lawn bowls clubs, etc. The spinner puts the coins on a flat stick made for the game; basically a popsicle stick, but wider, like a tongue depressor. They toss the coins up and flick the stick to tumble the coins. If you called the two matching coins correctly, you double your bet in winnings. Each game takes like 30s-1m and they go on from mid-morning til early afternoon ish.
The losers indirectly pay the winners based on your call of two heads or two tails and if there’s a split, the house wins that game.
Most of the time is spent drinking your beverage of choice, yelling and cursing your own luck, talking smack to the coins, and booing or cheering the spinner for a well run game or a bad string of luck. The bets are typically fairly low in my experience, although it's up to each individual player what they bet; the spinner or venue sets the bet amount per section or per spinner. Each spinner usually takes a set amount, like $5/$10/$20/$50, even $100 in some cases. It's all cash and pretty much the honor system, in that they aren't handing out receipts or tickets with your bet, so that sets an upper limit on how many bets each spinner can keep straight. No one wants to see someone lose their shirt, so most folks play against their friends/mates for fun. Ultimately you're all playing your own game, because you decide your bet, and it's actually fairly unpredictable due to the crowds milling around the spinners, of which there will be many, all taking bets and running games independently and simultaneously; players can place bets in any or all of the games around them if they want.
I would guess it's rather mathematical. Each coinflip has some number of half-flips. Now analyze the distribution of that number. If this distribution were to start at its maximum with 0 half-flips and decay as it increases, summing over the even values (same side up) clearly gives more than summing over the odd values. Now the distribution isn't going to be like that, but I expect that it's generally "front-loaded" in a way that causes a similar effect.
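A toy version of that argument, with the number of half-flips modelled as Poisson (my assumption for illustration, not the paper's precession mechanism):

```python
import math

def p_same_side(lam):
    """P(the coin shows its starting side) when the number of half-flips
    is Poisson(lam).

    Uses sum over even k of e^-lam * lam^k / k! = (1 + e^(-2*lam)) / 2,
    which exceeds 1/2 for every finite lam: any 'front-loaded' count of
    half-flips favours the side the coin started on.
    """
    return (1 + math.exp(-2 * lam)) / 2

low_energy = p_same_side(2)    # sluggish toss: noticeable same-side bias
high_energy = p_same_side(20)  # energetic toss: essentially fair
```

Under this toy model the bias decays exponentially with the mean number of half-flips, which matches the intuition that gentler tosses should show more same-side bias.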
Yeah, and seeing that the bias occurs only in some people, perhaps it occurs in people who do as little rotation as possible. Not sure if this study has a graph of the number of rotations in general; e.g. you could take all coin flips where 0-5 rotations occurred and compare them to 6-11 rotations.
> This is more than the casino advantage for 6-deck blackjack against an optimal player (5$)
I have seen that figure (roughly 0.5% edge) but that has to depend on how deep the shoe is dealt? I remember playing only the last hands with dealers playing down to between 1.0 and 0.5 decks left. That meant you could play hands where you knew almost all remaining cards were suited. I guess the average edge assumes constant bet and doesn't include betting strategies based on counting at all? (And those strategies obviously wouldn't work in any real casino because it's "frowned upon").
This was my intuition in childhood. If you choose tails to be yours and start with tails then catch it, it is most likely to be tails. I came up with this observation myself. Weird.
I know it's not on par with any real stats, but still... I just had this strong conviction that worked this way.
The really interesting stuff is why it is so. I would think along the lines of the brain timing the toss and the catch (eye-brain timing) rather than gravity and the coin itself. Added: Yeah, now I remember actually manipulating the timing of the catch to achieve this.
> Trying not to be disrespectful
No worries, it is just me using high context communication style, where I assume that you know that I know this and I just share what was my experience in childhood (it was not like a single thought).
Noticed as a kid I could flip a quarter with a certain consistency, so I experimented a bit and quickly got to be >90% accurate with an ordinary (controlled) flip.
Pretty simple. In fact I just picked up a quarter and practiced (20+ years out of practice) and have some observations:
1) harder than when I was a kid, my fingers are lot bigger + stronger so it's not as precise from the start. A bigger and heavier coin would help.
2) the timing factor is bigger than I recalled.. essentially you can watch the coin flipping and get a subconscious/automatic/predictable sort of count/feedback to it. You can bring your hand up to the coin in the air at a precise moment pretty easily and "tell" (>90% accuracy today of the flips I just did that I considered successful before looking at the result) if the flip was predictable. Hand eye coordination, spatial awareness is very correlated to this skill, I suppose.
3) it really is the same side that comes up.. again I think because of the automatic watching/count/completion of full rotations, i.e. catching the coin at the end of a full rotation instead of a partial.
Came in handy occasionally.. if I knew I was going to be wrong (other person usually waits to call mid-flip) I could catch the coin a little lower to give myself a chance, or punk them by not putting it on the back of my hand as is more standard (they might demand a re-flip.. kind of like if you are playing rock paper scissors and one person goes on 3 and the other on 4).
This. When I tried to comment briefly yesterday that the coin falls on the same side, only then did I remember that I actually did that, and that it has nothing to do with physics but rather neuroscience (and innocently bent morality, which was also the object of my internal observations). Then I remembered that I had actually considered different coin sizes, but I was never as thorough in my attempts to bend the results as you were. Oh... the playgrounds of childhood...
I think you can argue that the experiment wasn't representative of 'normal' coin flips.
On average, each flipper in that experiment flipped a coin over 7000 times; after that many flips, people will have learned to flip in a comfortable way, with less variance in the physical action and force they use. I'd imagine that in that case the coin would be more likely to land with the same orientation.
I don't think this would be true if someone flipped without practice.
I personally did 20,100 flips and I can assure you I have no clue how to control the flip. I certainly got much better at flipping and catching the coin in hand without dropping it, which takes some practice on its own.
(I know that there are techniques for adding wobble to the toss, but I didn't study them and I have no clue how to do them. I think it is safe to say you don't discover them intuitively.)
I think this is a great point. One flipper 7000 times is quite different than 7000 flippers one time, if the aim is to see whether there is an underlying bias.
I think you'd need more than practice. Most people would need a teacher. Just doing something over and over isn't automatically going to make you better. E.g. The old "10000 hours of practice makes you an expert" rule assumes deliberate practice. And even then, it's incorrect.
I'm still looking for an intuitive or ELI5 explanation of the mechanism for this bias.
The original paper says:
The standard model of coin flipping was extended by Persi Diaconis, who proposed that when people flip an ordinary coin, they introduce a small degree of 'precession' or wobble: a change in the direction of the axis of rotation throughout the coin's trajectory. According to the Diaconis model, precession causes the coin to spend more time in the air with the initial side facing up. Consequently, the coin has a higher chance of landing on the same side as it started.
Another coin toss experiment[1] site says this:
The basic reason is that, instead of rotating around a horizontal axis as one might imagine, a typical tossed coin is rotating around a tilted axis which is precessing in 3-space, and this entails a certain degree of "memory" of the initial parameters.
The Diaconis paper[2] has the definitive explanation but it's hardly intuitive. I got a feel for why it is, but I can't do an ELI5. The best I'm able to write is this: A human being is likely to introduce some precession in the coin toss. If there is precession, then the angular momentum vector is going to spend more time in the heads direction if starting from heads, and that accounts for the bias.
What I think would work well for an ELI5 is an animation of a coin toss showing the angular momentum vector sweeping out a region during its flight, and showing it spends slightly more time pointing toward heads.
Perhaps it helps to imagine someone had a "screwy thumb" and the coin only precesses when they "flip" it (in fact people can train themselves to do this, and it's very difficult for you, the sucker, to see in the air that the coin is not rotating but just precessing!). Hopefully it's obvious that whatever side is initially facing up will be the same one facing up when it's caught?
The next step is not at all intuitive to me, namely that even someone trying to do a fair flip causes some precession, and that this isn't decoupled from the rotation.
I always tell people that the result of a coin flip is highly start-state dependent.
Imagine a sequence of H(ead), T(ail), H, T, H, T, ... If the sequence starts with H first, in no way can the number of T exceed that of H, but the number of H might be 1 greater than that of T. I never tested it myself, but I hypothesize that the probability will be more skewed if the number of revolutions is lower, i.e. with a shorter head-tail sequence.
Edit: The sequence was meant to represent the sequence of head and tail facing up during the rotation. A sequence of [H, T] denotes one full rotation, starting with a head. [T, H, T, H] denotes two full rotations, starting with a tail and ending with a head. I didn't mean the result of a flip. So the result of a flip is the final element of the sequence.
Of course that is true, if you don't wait long enough.
Imagine the situation that you flip the coin (starting at H) and you grab it in air immediately. Of course, you will get H as result.
But let's say, the time to stop the coin can be a relatively long time T.
Then, I think the probability is some kind of sum. Let's choose \Delta T = 10ms as the time discretization:
For T -> \infty, P(H) and P(T) become more similar.
But in practice you wouldn't wait an equally distributed time; the waiting time would be more like Gaussian distributed. Hence, each term of the sum would get weighted differently.
And the variance and the offset of the Gaussian distribution can shift the probability in favor of H or T. It really depends on the concrete parameters. If you always grab after 35ms, then you'll always get T, for example.
If I were to dig through my comment history I would find I have already responded to this exact sort of comment before, so let me regurgitate :)
If the coin is resting on your hand waiting to be flipped, it is currently mid-way through being one side up. This is because the switch between being e.g. heads up and tails up happens when the coin is vertical. If that isn't a clear explanation, try imagining catching the coin and "flattening" it at different angles: while 50% of angles will match either side, at the moment the coin is flipped, it is already half-way through the angles representing the current side.
This means that the correct sequence you describe is not THTHTH but rather THHTTHHTTHH. Taken at even intervals, both sides will appear the same number of times. Taken at odd intervals, half of the intervals have more Ts and the other half have more Hs.
Can you toss a coin without it flipping at least once (changing state)? I find it quite unlikely to happen, so if the starting state was H, your sequence will be THTH...
I can't quite define what counts as the start of the sequence. Maybe a sequence should always have at least one element, and a toss straight up is allowed. But if some flipping is mandatory, then the start of the sequence would mean the other side of the coin. All I could deduce before this paper is that the probability of a coin flip is skewed.
Huh. I thought everyone did it my way. Flip it onto the floor, hunt it down, and see which side came up.
I'm mostly serious. I know that's not ideal. But I've never been able to master the art of flipping a coin onto the back of my hand, and even catching it mid-air is hit-or-miss for me. The vast majority of time I or anyone I'm with has flipped a coin it's ended up on the floor.
> I've never been able to master the art of flipping a coin onto the back of my hand
That's not the usual way of doing it. Usually, you catch it in the palm of your throwing hand, and only then put it on the back of your other hand, flipping it once more in the process.
Based on the paper it looks like 55% chance that it will land on the same side it started is possible. This was the most extreme subject.
The bias is caused by precession, so I want my flip to precess as much as possible. Maybe I should offset my finger as far from the center of the coin as possible. Also, putting as much force into it as possible is probably a good idea.
Finally I have to catch it in a way that the top side is facing up when I reveal it.
Give it some spin with your index finger.
Imagine putting a coin between your index finger and thumb, heads side up, resting on your middle finger below — not too different than how most people start a coin flip. With your index finger, rotate the coin in its plane, so that it stays “heads up” but the head is rotating.
Now try doing this and simultaneously flipping the coin by flicking it with your thumb at a point on the bottom, close to the edge. If you do it right, you’ll impart a spin.
If you impart a modest spin, the coin will never actually flip over, but will just wobble, and will therefore land heads up. An observer will likely not know what you did, because it is hard for the eye to tell the difference between a flip and a wobble at high speeds.
Interesting, my intuition was the inverse of the result: landing on the opposite side takes an odd number of flips and landing on the same side takes an even number of flips.
Since for every count of flips the number of odd values is either equal to the number of even values or higher by one, the chances of getting an odd number of flips are higher. There seems to be more at work here, and only testing validates a model!
I would like to see how TianqiP flips the coin. This user has a ratio of 0.601 [0.582, 0.619] heads, which is a lot. This is the type of skill that can get you a couple of bucks if played strategically.
They actually flipped a coin >350,000 times and the paper has 49 authors. Based on the paper https://arxiv.org/pdf/2310.04153.pdf, they recorded video footage of each coin flip
I thought they would have run a computer simulation instead.
Is there a minimum number of rotations per coin flip to consider it valid?
If looks like the bias was not evenly distributed across people. How did you protect your experiment from skilled bad actors who could influence the data with a few bad/skilled flips? Did strangers on the internet fare any differently than in-person attempts from trusted people?
We told people that the coin has to flip at least once (which would bias it toward the opposite side). Whenever instructing people, I tried to explain that the coin flip should look like you were trying to determine the outcome of a bet.
You can find the complete experimental protocol here: https://osf.io/hkv8p
Also, I wish I had (any) budget to hire professional skilled tossers haha.
I assumed that the 1% bias was entirely due to coins that did not undergo any rotation at all. However, reading that you told people that the coin has to flip at least once, I think I assumed wrongly. It sounds like the bias is due to coins that have undergone an integral number of 360-degree rotations (not zero rotations). But what exactly is the physical mechanism causing this bias? It's easy to understand why zero rotations would introduce a bias, but I can't easily picture a reason for a bias toward an integral number of 360-degree rotations. Is there a simple and intuitive way you can explain the physical reason?
It's not about the number of rotations at all. I doubt that you can control it even after dozens of hours of coin flipping (I did more than 20h and I can't even guess how many rotations the coin made). Diaconis, Holmes, and Montgomery (2007) proposed a physical model of coin flipping that introduces the bias as a result of wobbliness (i.e., off-axis rotation in the flips).
Given access to repeated uses of a coin of unknown bias "p" (which is not 0 or 1) you can (eventually) always generate a new coin flip with bias given (exactly) by:
Can someone who has access to a precision robot and a controlled environment please check: if one applies the same force to the coin when flipping it, with it landing on a (soft) surface at the same height, will it land the same side every time?
Your comment sniped me into thinking of some over-engineered systems to measure this with the human factor included: some sort of RFID coin, balanced on all axes, with an IMU chip that measures the force applied to the coin (through acceleration) and the mode and number of rotations of the coin. Maybe a computer vision solution would also work.
However, I was unaware that they already conducted said experiment. I am now left confused as to why they would not mention such in the abstract, and refer only to natural experiments and second order measurements.
Based on my back-testing of the stock market, I found a similar conclusion: if a stock rose yesterday, then today the probability of a rise > the probability of a fall. P(rise) is about 50.1%, P(fall) is about 49.9%. And vice versa.
Having this theory means you can't rely on a single bet; you have to bet many, many times to make a profit from the stock market. Even though I know that, I am still working as a developer. I wish that one day I'll have enough money to start a stock career.
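This kind of conditional probability is easy to estimate; here's a sketch on synthetic data (the momentum parameter and the random-walk model are invented for illustration, not real market data):

```python
import random

random.seed(7)
MOMENTUM = 0.002  # hypothetical carry-over: P(up | up yesterday) = 0.502

# Generate a synthetic sequence of daily up/down moves with mild momentum.
moves = []
prev_up = True
for _ in range(500_000):
    bias = MOMENTUM if prev_up else -MOMENTUM
    prev_up = random.random() < 0.5 + bias
    moves.append(prev_up)

# Estimate P(up today | up yesterday) from the series.
up_after_up = sum(1 for a, b in zip(moves, moves[1:]) if a and b)
ups = sum(1 for a, b in zip(moves, moves[1:]) if a)
p_up_given_up = up_after_up / ups  # recovers roughly 0.502
```

With an edge this small, even half a million observations only pin the estimate down to within about a tenth of a percentage point, which is why tiny backtested edges are so easy to mistake for noise (and vice versa).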
Alert: It's not going to be as simple as this to make money even with large numbers. So don't fret about not having the money and missing some golden opportunity.
There are enough actors out there trying to develop complex algorithms to find an edge, and so there wouldn't be any simple edges like this left anywhere as they would be arbitraged away.
If there is a simple pattern, it is noticed and traded until the pattern disappears.
Also, if you do find an edge, you need to be prepared for what happens when everyone else discovers your arbitrage opportunity, and/or you tap out the potential of it.
A number of failed finance companies have the story: find a legitimate arbitrage opportunity; take on a billion dollars in investment to exploit the opportunity; make bank; other people discover your arbitrage opportunity and jump on; stop making bank; try riskier and riskier investment opportunities to keep it going; engage in outright fraud to keep it going; go bankrupt and/or to jail.
The stock market is such a slippery addiction slope. With a casino, a reasonable person knows the odds are stacked against them, but with the stock market there's no guarantee, and it's really easy to convince yourself that you have an edge. Sometimes it works for a while, and then it doesn't, but you are already used to that feeling of reward, and as you said you will start to engage in riskier and riskier opportunities to get that feeling back.
Yeah, you could find a pattern that wins 99% of the time to yield 1% of what you risk, but you don't consider that there's always a 1% chance of losing it all, and it's priced in; it just hasn't happened yet, but systems with more precise data have accounted for it.
E.g. something like selling 0dte options sufficiently out of money might seem like a free money hack, but once something unexpected happens you are down to just "who could have possibly foreseen that event to occur, my decision making was solid.".
> you have to bet many, many times to make a profit from the stock market
You also have to take into account transaction fees and broker spread. (If you get a great deal on one of these, check the other very carefully!) I'd be quite surprised if the edge on your system is enough to cover those.
Supposedly dice tend to roll higher if you roll them starting with the largest side up. I use that when I'm creating my character in D&D and it seems to work.
Then I guess we should include every study participant as an author, across all disciplines. A 200-participant study in psychology? 200 authors on the paper.
Not necessarily? The difference is that the people flipping the coins are not just experimental subjects, they're actually implementing the experimental protocol.
The kinds of participants you're talking about are not just left off the paper out of lack of interest. It's often the ethically preferable option. They often have a vested interest in remaining unnamed for privacy reasons, and derive no tangible benefit from being listed as authors.
A big author list isn't totally unheard of. The paper where they announced the discovery of the Higgs boson had an author list that spanned 8 densely packed pages.
I actually used to be really good at manipulating this as a kid. Basically, if you toss a large coin with a stiff arm, you can get it to flip exactly 1 and a half times before you catch it. I would always use this to win bets with my friends.
Could it have anything to do with contact with the hand? Moisture/oil from the hand making the “same side they started on” a tiny bit heavier than the other side? Or, and I’m no physicist, static electricity or something?
Very interesting! Is there reason to believe that the outcome would be different if the experiment was re-run but replacing the humans with coin-flipping machines?
I watched one of the 12-hour coin tossing marathons.
They were sitting with their laptops and pressed a button for every result.
I wonder if human error can explain (at least part of) the deviation from 50/50:
* locations of the buttons they pressed on the laptops (they only pressed once per toss before enter, meaning the button represented same-side or other-side)
* remembering what the coin started out as may be harder (or easier, but probably harder) when the result is other-side
* other??
Need to repeat this number of tosses, but with a higher degree of supervision, to be sure of the result.
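For a rough sense of why the toss count has to be so large, here's a standard normal-approximation sample-size estimate. The 0.508 same-side probability below is an assumed effect size, in the ballpark of the ~51% figure discussed in this thread, not a claim about the true bias:

```python
import math

# Approximate number of flips needed to distinguish a same-side
# probability of p = 0.508 (assumed effect size) from the fair null
# p0 = 0.5, at two-sided 5% significance with 80% power.
p, p0 = 0.508, 0.5
z_alpha = 1.96   # two-sided 5% significance
z_beta = 0.84    # 80% power
n = (z_alpha + z_beta) ** 2 * p * (1 - p) / (p - p0) ** 2
print(math.ceil(n))  # tens of thousands of flips
```

With an effect this small, even a 12-hour marathon of manual tosses barely clears the bar, which is why pooling many flippers matters.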
That's actually not completely accurate. The study protocol (https://osf.io/hkv8p) describes the procedure in greater detail.
People were pressing one button for heads and another button for tails (which we deemed less error-prone and less likely to be subconsciously influenced). The trick was that each coin flip started with the same side up as the previous flip landed, so there was no need to record the starting position (and we randomized the starting position of every 100th flip).
We also did some auditing of the video recordings (trying to decode the outcomes from the videos), and they showed a degree of bias quite consistent with the original responses.
It's simpler: an ideal coin flip is simply assumed to be uniformly distributed, on the basis of there being two possible outcomes and no influence. Where the bias in reality comes from doesn't matter.
This also happens to be the great divide between frequentists and Bayesians.
Even simpler than that, actually. There's no requirement for any distribution at all. (And I would argue strongly against a uniform prior, but that is a separate discussion.)
What's necessary to guess 50% on the first toss is simply (a) complete ignorance about the bias, whatever it is, and (b) the hypothesis that the bias is just as likely to be negative as positive (i.e. a symmetric prior).
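A minimal simulation of that point, assuming (purely for illustration) that the unknown bias is drawn from some symmetric distribution around 0.5:

```python
import random

random.seed(0)

# Draw an unknown bias symmetrically around 0.5, flip once with that
# bias, and repeat. Averaged over our ignorance, the first toss comes
# out 50/50 even though any individual coin may be heavily biased.
trials = 200_000
heads = 0
for _ in range(trials):
    bias = 0.5 + random.uniform(-0.3, 0.3)  # symmetric; shape is arbitrary
    if random.random() < bias:
        heads += 1
print(heads / trials)  # close to 0.5
```

Any symmetric distribution for the bias gives the same result; uniformity plays no role.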
Wait, didn't Ed Thorp argue the exact opposite – if you have a super-person who can guarantee lack of bias, then you also have absolute Newtonian predictability, by virtue of the mechanical perfection. The randomness must come from somewhere, and it comes from imperfections, which also incidentally introduce bias.
Not necessarily, because spinning and bouncing coins are often much more biased than flipped coins. (Unequal weight distribution between the sides can bias a spun coin while it does not bias a flipped coin. There are a couple of studies on this too.)
1. Flip the coin twice
2. If you get the same result both times, goto 1
3. Now that you have different results for your pair of flips, use the first flip of the pair as your result.
https://en.wikipedia.org/wiki/Fair_coin#Fair_results_from_a_...
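A sketch of that procedure (the von Neumann trick from the linked article), using an exaggerated, made-up bias of 0.6 so the effect is easy to see:

```python
import random

def biased_flip(p_heads=0.6):
    # A deliberately biased coin; 0.6 is an arbitrary demonstration value.
    return "H" if random.random() < p_heads else "T"

def fair_flip():
    # Flip twice; if both flips match, start over; otherwise return the
    # first flip of the unequal pair. HT and TH are equally likely
    # (p*(1-p) each), so the output is unbiased for any fixed bias p.
    while True:
        a, b = biased_flip(), biased_flip()
        if a != b:
            return a

random.seed(0)
results = [fair_flip() for _ in range(100_000)]
print(results.count("H") / len(results))  # close to 0.5 despite the 60/40 coin
```

The price is an unbounded (but geometrically distributed) number of flips: on average 1/(2p(1−p)) pairs per fair bit, so a heavily biased coin costs more retries.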