Benchmarking is pretty meaning-free in the best cases. The JVM isn't any worse in this regard.
The JVM is slow in many regimes due to things like JIT warmup. Rather than accepting that, many JVM people are hypersensitive: thou must only benchmark in the JVM's best-case scenario.
> Benchmarking is pretty meaning-free in the best cases.
That's completely false. Benchmarking is a staple of building and evaluating performance-sensitive libraries and other routines. It works less well in non-deterministic runtimes, which narrows its scope, but in a language like C++ it's highly useful and reliable for evaluating libraries and catching regressions.
> The JVM is slow in many regimes due to things like JIT warmup. Rather than accepting that, many JVM people are hypersensitive: thou must only benchmark in the JVM's best-case scenario.
Well, it's not just the JIT that's a problem; there are also things like GC passes. Do those get included in the results or not? Do you force GC passes between runs? How about finalizers? The answers to those questions depend on the state of the rest of the system and the expected workload. It's not something you can trivially answer, or even accommodate in a framework, since most of the behavior is up to the particular JVM implementation, which can vary further based on command-line flags.
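To make the dilemma concrete, here's a minimal hand-rolled microbenchmark sketch (class and method names are hypothetical, not from any framework) that bakes in two of the policy choices above: warming up the JIT first, and forcing a GC between measured runs. Whether either choice reflects your real workload is exactly the open question.

```java
import java.util.Arrays;

public class GcAwareBench {
    // Hypothetical workload standing in for the code under test.
    static long workload(long[] data) {
        long sum = 0;
        for (long v : data) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        Arrays.fill(data, 1L);

        // Warmup: give the JIT a chance to compile the hot path
        // before any timing starts.
        for (int i = 0; i < 5; i++) workload(data);

        int runs = 5;
        long[] nanos = new long[runs];
        for (int i = 0; i < runs; i++) {
            // Policy choice: request a GC between runs so collector
            // pauses from the previous run don't bleed into this one.
            // Note System.gc() is only a hint; the JVM may ignore it.
            System.gc();
            long t0 = System.nanoTime();
            long result = workload(data);
            nanos[i] = System.nanoTime() - t0;
            if (result != 1_000_000L) throw new AssertionError("bad sum");
        }
        System.out.println(Arrays.toString(nanos));
    }
}
```

Even this tiny sketch hard-codes answers (warmup count, GC between runs, finalizers ignored) that a general framework can't choose for you, and a different JVM or different command-line flags would shift the numbers anyway.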