Agreed. I wrote a fuzzer at my last job and it found a bunch of bugs right before a release. Nobody knew what fuzzing was so I was attacked by the program owner for trying to break the software and given an insulting performance review for it. Then I had all the fuzzing results and coredumps deleted out of their directories by the program owner so the release looked immaculate. Defense software ftw
In such projects there is often a promised release date and a contract stipulating, say, a total test pass rate of at least 95%.
Now if you use a fuzzer to generate a lot of failing test cases (assuming it only saves the failing ones), it will impair the ability to release.
So sadly there are often incentives not to fuzz close to a release.
The management euphemism is "taking a risk-based testing approach": selectively removing test cases to hit the numbers needed to qualify for a release.
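To make the incentive concrete, here's a toy calculation (all numbers invented) showing how a modest batch of fuzzer-found failures can drag an otherwise-perfect suite below a 95% contractual threshold:

```python
# Hypothetical numbers, purely illustrative: a suite at 100% pass rate
# falls under a 95% contractual threshold once the fuzzer checks in
# its failing cases.
def pass_rate(passing: int, failing: int) -> float:
    """Fraction of test cases that pass."""
    return passing / (passing + failing)

baseline = pass_rate(1000, 0)    # 100% -> releasable
with_fuzz = pass_rate(1000, 60)  # ~94.3% -> below 95%, release blocked
print(f"{baseline:.1%} -> {with_fuzz:.1%}")
```

So the "risk-based" fix is to delete the 60 failing cases rather than the 60 bugs, and the metric goes back to 100%.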