The article uses an interesting way to note speed differences, using relative (-20%) changes rather than absolute (0.8) differences. I've not seen that before and I'm not sure I like it.
For example, something at -99% runs at 1/100th the original speed (100x slower), while something at +100% is only twice as fast. I think especially the average is wrong. With the above example, the arithmetic average would be roughly 0%, or 'no change', which IMHO isn't fair.
Relative is better than absolute. If you have a benchmark like this:
for(i=0; i<100; i++){ do something }
Then it could just as well have been this:
for(i=0; i<10000; i++){ do something }
If you average absolute time differences, this change makes that benchmark count 100x more than the other benchmarks, even though nothing meaningful changed.
IOW, the relative importance of benchmarks in an average is arbitrary. If you measure speedups you give every benchmark the same weight.
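A quick sketch of this point, with made-up timings: the second benchmark below is the same workload run 100x more iterations, and under an average of absolute differences it drowns out the first.

```python
# Hypothetical timings (seconds) for two benchmarks, before and after a change.
# "bench_large" is the same work as "bench_small", just 100x more iterations.
before = {"bench_small": 1.0, "bench_large": 100.0}
after  = {"bench_small": 0.5, "bench_large": 110.0}

# Averaging absolute differences: the large benchmark dominates.
abs_diffs = [after[k] - before[k] for k in before]
print(sum(abs_diffs) / len(abs_diffs))   # +4.75 s -- the 2x speedup vanishes

# Averaging speed ratios: each benchmark carries equal weight,
# regardless of how many iterations it happens to run.
ratios = [after[k] / before[k] for k in before]
print(sum(ratios) / len(ratios))         # 0.8 -- a 2x speedup and a 10% slowdown
```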
The "average" he uses, the geometric mean, doesn't have the problem you describe. The geometric mean multiplies the speed ratios and takes the nth root: for a 99% slowdown (0.01x) and a 100% speedup (2x), that's sqrt(1/100 * 2) ≈ 0.1414. So the "average" is an approximately 1/0.1414 ≈ 7.07x slowdown, not 'no change'.