Benchmarking papers can be inaccurate when the original algorithms are not open sourced and a grad student has to rewrite them from scratch. The reimplementation can easily differ in its details and wind up slower than the original.
I do think that the original algorithm authors should have the opportunity to correct the benchmarking code, or to release their original implementation as open source to be benchmarked.
In some sense, the benchmarking paper with a slower implementation is more "correct": an engineer evaluating which algorithm to use is just as likely to reimplement it in a slower way than the original authors did. The incentives are right, too: the original authors should be providing enough detail to recreate their work, and the benchmarker is showing that, as actually published, the algorithm is slow.