Run n times and compare results. The odds of the same bit flipping the same way across multiple independent runs are vanishingly small. Those n runs could be on the same machine, or distributed and reconciled with Paxos. Then back everything up a billion times. Safety is extremely hard to guarantee, so even the closest approximations will be expensive.
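A minimal sketch of the run-n-times-and-vote idea, assuming a hypothetical zero-argument `task` callable that returns bytes: hash each run's output and accept the majority result, since a random bit flip would have to corrupt a majority of runs identically to slip through.

```python
import hashlib
from collections import Counter

def run_redundant(task, n=3):
    """Run the same task n times and return the majority result.

    `task` is a hypothetical zero-argument callable returning bytes.
    A transient bit flip in one run produces a divergent digest and
    is simply outvoted; with no majority we flag the batch for rerun.
    """
    results = []
    digests = []
    for _ in range(n):
        out = task()
        results.append(out)
        digests.append(hashlib.sha256(out).hexdigest())
    winner, count = Counter(digests).most_common(1)[0]
    if count <= n // 2:
        raise RuntimeError("no majority agreement -- rerun the batch")
    return results[digests.index(winner)]

# usage: three local runs standing in for replicated machines
result = run_redundant(lambda: b"computed payload", n=3)
```

The same vote works whether the n runs share one box or sit behind a consensus protocol; only the comparison step changes.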
I doubt that would cause an instance to fail entirely in a majority of cases. Linux has its own memory corruption checking and fixing (on 64-bit words, AFAIK), and customers can add their own sanity checks or CRC data to detect and eliminate corruption as much as possible. Many file and copy operations have built-in corruption checking as well. On top of that, the virtualization platform surely has its own series of sanity checks and complex hardware-handling logic.
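The "code your own CRC data" point can be sketched in a few lines with the stdlib's `zlib.crc32`; the function names here are illustrative, not any particular library's API. Tag a payload with its checksum on write, verify on read, and a flipped bit shows up as a mismatch:

```python
import zlib

def tag(data: bytes) -> bytes:
    """Append a 4-byte CRC32 so later corruption can be detected."""
    return data + zlib.crc32(data).to_bytes(4, "big")

def check(tagged: bytes) -> bytes:
    """Verify the trailing CRC32; raise if the payload was corrupted."""
    data, crc = tagged[:-4], int.from_bytes(tagged[-4:], "big")
    if zlib.crc32(data) != crc:
        raise ValueError("CRC mismatch: data corrupted")
    return data

blob = tag(b"customer record")
check(blob)  # round-trips cleanly

# flip a single bit and the check catches it
corrupted = bytes([blob[0] ^ 0x01]) + blob[1:]
try:
    check(corrupted)
except ValueError:
    pass  # corruption detected
```

CRC32 only detects corruption; recovering from it still requires a good copy somewhere, which is where the backups come in.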
Ultimately, it's a risk-management business decision. And if you're in any kind of serious business like banking, there are already regulations around the standards of hardware and software you can use.
Or your page cache:
https://blogs.oracle.com/ksplice/entry/attack_of_the_cosmic_...