The difference between diamond native and diamond WASM demonstrates that, even with WASM, native implementations still beat the browser handily. Native implementations remain very worthwhile performance-wise, especially for lower-powered devices and, perhaps, for reducing battery usage (as a consequence of less CPU use) on mobile devices.
The wasm implementation here was still running under a JavaScript test harness, so I suspect it's the JS-WASM boundary interactions that are causing the slowdown. WASM itself (if it doesn't need to interact with JavaScript) usually runs with a much smaller performance penalty.
I suspect so too - given there are 280,000 calls across the JS-WASM boundary, and most of those calls pass a string. I'd love to know for sure, though. I considered making this benchmark pass the whole data set in one go through JSON or something, but that felt like cheating, since that's not how the API would be used in practice during a real editing session.
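To make the trade-off concrete, here's a minimal sketch (hypothetical function names, not the actual diamond-types API) contrasting the two call shapes: one boundary crossing per edit versus one crossing for a whole pre-serialized batch.

```rust
// Per-call shape: in the benchmark, each of the ~280,000 edits crosses
// the JS-WASM boundary individually, most of them passing a string.
fn apply_edit(doc: &mut String, pos: usize, text: &str) {
    doc.insert_str(pos, text);
}

// Batched shape: all edits cross the boundary once, serialized up front.
// Faster, but unrealistic for an interactive editing session, where
// edits arrive one keystroke at a time.
fn apply_edits(doc: &mut String, edits: &[(usize, &str)]) {
    for &(pos, text) in edits {
        apply_edit(doc, pos, text);
    }
}
```

The batched version amortizes the per-call boundary cost across the whole batch, which is exactly why it would make the benchmark look unrealistically good.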
But even paying that cost, it still seems much faster to use Rust + WASM than to run the same algorithm in plain JavaScript. And the JS-WASM boundary will probably get gradually faster over the next few years.
Yes. Ultimately, WASM executes within a sandbox and is JIT compiled (read: not heavily optimized, except eventually for hot loops). If native compilation is an option, it makes sense to go that route.
WASM competes with asm.js, not native assembly (or, arguably, the JVM etc.)
WASM JIT implementations tend to be quite a bit different from JavaScript JIT, so that's not really where the perf difference comes from.
First, WASM gets all the heavy AOT optimizations from the middle end of the compiler producing it. At runtime, WASM JIT doesn't start from program source, but from something that's already been through inlining, constant propagation, common subexpression elimination, loop optimizations, dead code elimination, etc. And WASM is already typed, so the JIT doesn't have to bother with inline caching, collecting type feedback, or supporting deoptimization.
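To illustrate the AOT point, here's a tiny made-up example of work the Rust compiler's middle end finishes before the .wasm binary ever reaches the browser:

```rust
// After inlining `square` and constant-folding, the optimizer reduces
// `area_of_5` to a function that simply returns 25. The WASM JIT only
// ever sees the folded result, and the () -> i32 signature is fixed in
// the binary, so no inline caches, type feedback, or deoptimization
// paths are needed at runtime.
#[inline]
fn square(x: i32) -> i32 {
    x * x
}

fn area_of_5() -> i32 {
    square(5)
}
```

A JavaScript JIT, by contrast, has to rediscover all of this at runtime from dynamically typed source.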
Because of that, the only really beneficial work left to do is from the back end (i.e. arch-specific) part of the compiler: basically, register allocation and instruction selection. WASM JIT compilers don't bother trying to find hot loops or functions before optimizing. Instead, they do a fast "streaming" or "baseline" codegen pass for fast startup, and then eagerly run a smarter tier over the whole module and hot-swap it in as soon as possible. (See e.g. https://hacks.mozilla.org/2018/01/making-webassembly-even-fa...)
The perf difference vs native rather comes from the sandboxing itself: memory access is bounds checked, support for threads and SIMD is limited (for now), talking to the browser has some overhead from crossing the boundary into JavaScript (though this overhead will go down over time as WASM evolves), etc.
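As a rough conceptual sketch (not how any real engine implements it; production runtimes typically use guard pages or fold the check into address computation), the bounds check on a WASM linear-memory load looks like this:

```rust
// Conceptual sketch: every WASM linear-memory access must stay inside
// the sandbox's memory region, or the module traps.
fn load_u8(memory: &[u8], addr: usize) -> Result<u8, &'static str> {
    if addr >= memory.len() {
        return Err("out-of-bounds access: trap");
    }
    Ok(memory[addr])
}
```

Native code pays no such per-access cost, which is one reason the gap to native persists even after the JIT has done its best.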