The problem with almost every language benchmarking blog post I see is that the author is generally an expert in at most one of the languages they are benchmarking, and so they write slow or non-idiomatic code in the other languages, leading to a useless comparison.
Here's one example that I picked up. The author writes that they had to refactor part of the Haskell code from (paraphrasing)
```haskell
if (checkColl tr rsDone || checkBound tr)
  then {- branch 1 -}
  else {- branch 2 -}
```
into
```haskell
if checkBound tr
  then noFit
  else if checkColl tr rsDone
    then noFit
    else {- branch 2 -}
  where
    noFit = {- branch 1 -}
```
because of "problems with lazy evaluation". But in fact the only problem is that the call to `checkBound` is fast whereas the call to `checkColl` is slow, and the `(||)` operator evaluates its left argument before deciding whether it needs to evaluate its right argument (just like in C, and every other language I've ever used). So all that is required to get the speedup is to switch the order of the calls:
```haskell
if (checkBound tr || checkColl tr rsDone)
  then {- branch 1 -}
  else {- branch 2 -}
```
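To see the left-to-right short-circuit behaviour concretely, here is a minimal standalone sketch (the `undefined` standing in for the expensive call is my own illustration, not from the original benchmark code):

```haskell
module Main where

-- (||) is defined in the Prelude roughly as:
--   True  || _ = True
--   False || y = y
-- so when the left argument is True, the right argument is never forced.

main :: IO ()
main = do
  -- The right-hand side would crash if it were evaluated, but (||)
  -- short-circuits after seeing True on the left.
  print (True || undefined)  -- prints True
```

This is the same principle as `&&`/`||` in C: put the cheap (or most often decisive) test on the left and the expensive one on the right.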
I should have made it clearer; when I said "Lazy evaluation often tricked me up", I meant that I made the stupid mistake of thinking Haskell could somehow magically evaluate the shortest branch first via lazy evaluation. That is, lazy evaluation led me to make mistakes because I expected too much from it, not because of any inherent fault in lazy evaluation.