The big problem I see is that if you ever want to do something that has to be precise, you'll need two processors. Perhaps in the future we'll see chips with a "probability drive" in addition to the normal cores.
Now we've just got to develop an improbability drive...
You might have to do it far more than 7 times to converge on a suitably reliable answer... but perhaps the precise recomputation could be deferred to off-hours, handled as lagged batch processing, or done only after disputes, etc.
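Rough math on that (a sketch only, assuming the per-run error is zero-mean with some standard deviation sigma, which the article doesn't spell out): averaging n runs shrinks the standard error by 1/sqrt(n), so hitting a tolerance eps at confidence level z takes roughly n = (z*sigma/eps)^2 runs.

    import math

    def runs_needed(sigma, eps, z=1.96):
        # Standard error of the mean after n runs is sigma / sqrt(n);
        # require z * sigma / sqrt(n) <= eps and solve for n.
        return math.ceil((z * sigma / eps) ** 2)

    # e.g. 1% relative noise per run, mean wanted within 0.1% at ~95% confidence:
    print(runs_needed(sigma=0.01, eps=0.001))  # 385 runs -- far more than 7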
On mainframes where it really, really matters that the answers be right, three processors run the same calculation in lockstep, and any processor that deviates from the other two has its result thrown out.
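The voting itself is trivial, just majority-of-three; a toy sketch (tmr_vote and the values below are purely illustrative, not how any real mainframe does it):

    def tmr_vote(a, b, c):
        # Majority vote: keep whatever value at least two cores agree on;
        # the deviating core's result is thrown out.
        if a == b or a == c:
            return a
        if b == c:
            return b
        raise RuntimeError("no two cores agree -- unrecoverable fault")

    print(tmr_vote(42, 41, 42))  # core b glitched; the vote still returns 42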
If I read the article right, it's not that it sometimes fails; rather, it almost always fails, but the amount of error is predictable. You can converge on the exact value by repeating and averaging the calculation, but it would be simpler to have an exact co-processor.
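Quick simulation of that repeat-and-average idea (noisy_mul is my stand-in for the imprecise hardware; the zero-mean Gaussian error is an assumption on my part: if the chip's error is biased, the average converges to the biased value instead, unless you subtract the bias off first):

    import random

    def noisy_mul(x, y, rel_err=0.01):
        # Stand-in for the imprecise hardware: almost never exactly right,
        # but the error is predictable (modeled here as zero-mean, ~1% relative).
        return x * y * (1.0 + random.gauss(0.0, rel_err))

    def averaged_mul(x, y, n=10_000):
        # Repeat and average; the mean converges on the exact product
        # at a 1/sqrt(n) rate -- provided the error really is zero-mean.
        return sum(noisy_mul(x, y) for _ in range(n)) / n

    print(averaged_mul(6, 7))  # ~42.00, typically within about +/-0.004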