
I'm not a fan of the set-seed solution. In the past, when I've tested PRNG implementations (Erlang used to lack support for multiple, independently seeded RNGs), my approach was to decide on an acceptable rate of false negatives and design the test around that. I figured I'd run the test suite no more than 10,000 times, and I wanted a 1/1,000,000 chance of ever seeing a false negative.
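(The per-run probability that budget implies is easy to back out. This is a sketch of the arithmetic, not the commenter's original calculation; the run count and overall budget are the figures quoted above.)

```python
# Derive the per-run false-negative probability needed so that `runs`
# independent test-suite runs stay under an overall failure budget.
runs = 10_000            # expected lifetime number of test-suite runs
overall_budget = 1e-6    # acceptable chance of EVER seeing a false negative

# If each run fails independently with probability p, the chance of at
# least one failure across `runs` runs is 1 - (1 - p)**runs. Solving
# 1 - (1 - p)**runs = overall_budget for p:
per_run = 1 - (1 - overall_budget) ** (1 / runs)
print(per_run)  # on the order of 1e-10
```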

I can't remember the exact math I used at the time (I had to crack open a stats textbook), but ultimately it boiled down to generating a bunch of (seed, number of values previously pulled based on that seed, value) tuples, running a linear regression against them, and defining a maximum acceptable R^2 value based on my 10^-30 acceptable probability of a false fail.

When the RNG is not the thing being tested, mocking the RNG to do a sampled sweep through the RNG's output range is typically the correct move.
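(A sketch of what that mocking can look like, under assumed names: `pick_discount` is a hypothetical function under test, not anything from the comment. The fake RNG returns evenly spaced points across the output range [0, 1), so the code under test is exercised deterministically at every region of that range.)

```python
from unittest import mock

def pick_discount(rng):
    # Hypothetical code under test: grants a discount with 10% probability.
    return rng.random() < 0.1

def test_discount_over_rng_range():
    hits = 0
    for i in range(100):
        # Sweep the RNG's output range in evenly spaced steps.
        fake_rng = mock.Mock()
        fake_rng.random.return_value = i / 100
        if pick_discount(fake_rng):
            hits += 1
    # Exactly the sweep values 0.00 through 0.09 should qualify.
    assert hits == 10

test_discount_over_rng_range()
```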


