Execution speed of the compiled artifact doesn't seem to be Cranelift's priority; they're focusing instead on compilation speed and security. Those are still useful in Rust, at least for debug builds, where you want a quick turnaround and as much verification as possible.
Sorry about binaryen.js - those JS/TS bindings could be a lot better, and better documented, but priorities have generally been focused on improving the optimizations in core Binaryen.
That is, most work in Binaryen goes into improving wasm-opt, which takes wasm in and emits wasm out, so any toolchain can use it (not just JS/TS ones).
But if someone had the time to improve the JS/TS bindings that would be great!
No, it's still behind a flag (and so transitively, exceptions are too, because we built exception objects on top of GC).
Our docs (https://docs.wasmtime.dev/stability-tiers.html) put GC at tier 2, with the cited reason being "production quality"; I believe the remaining concern there is that we eventually want a semi-space copying implementation rather than the current deferred reference counting (DRC) collector. Nick could say more. But we're spec-compliant as-is, and the question was whether we've implemented these features -- which we have :-)
The special part is the "signal handler trick", which is easy to use with 32-bit pointers. You reserve 4GB of memory - all that 32 bits can address - and mark everything above the used memory as trapping. Then you can just do normal reads and writes, and the CPU's memory protection hardware catches out-of-bounds accesses for you.
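Roughly, the reservation looks like this (an illustrative C sketch, not any engine's actual code; the function name is made up):

    #include <stdint.h>
    #include <stddef.h>
    #include <sys/mman.h>

    #define WASM32_SPACE (1ULL << 32)  /* everything a 32-bit index can name */

    /* Reserve the full 4GB as inaccessible, then commit only the pages the
       module has actually grown into. Any access past used_bytes faults in
       hardware, and the runtime's SIGSEGV handler turns it into a wasm trap. */
    static uint8_t *reserve_linear_memory(size_t used_bytes) {
        void *base = mmap(NULL, WASM32_SPACE, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (base == MAP_FAILED)
            return NULL;
        if (mprotect(base, used_bytes, PROT_READ | PROT_WRITE) != 0)
            return NULL;
        return (uint8_t *)base;
    }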
With 64-bit pointers, you can't really reserve all the possible space a pointer might refer to. So you end up doing manual bounds checks.
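So each access compiles to something like this (sketch; trap_out_of_bounds is a hypothetical stand-in for the runtime's trap path):

    #include <stdint.h>
    #include <stdlib.h>

    static void trap_out_of_bounds(void) { abort(); }  /* hypothetical */

    /* A 64-bit (memory64) load: an explicit comparison guards every access,
       because no guard region can cover the full 64-bit range. */
    static inline uint8_t load8(const uint8_t *mem, uint64_t mem_len,
                                uint64_t addr) {
        if (addr >= mem_len)
            trap_out_of_bounds();
        return mem[addr];
    }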
Can't bounds checks be avoided in the vast majority of cases?
See my reply to nagisa above (https://news.ycombinator.com/item?id=45283102). It feels like, with a bit of compiler cleverness, one should be able to use trailing unmapped barrier/guard regions to elide almost all of the bounds checks that occur in a program, converting them into trap handlers instead.
Yeah, certainly compiler smarts can remove many bounds checks (in particular for small deltas, as you mention), hoist them, and so forth. Maybe even most of them in theory?
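For the small-delta case the idea looks something like this (illustrative sketch; GUARD and the field layout are assumptions, and trap_out_of_bounds is the hypothetical helper from above). With a trailing unmapped guard of GUARD bytes past the committed length, one explicit check on the base index covers every constant offset smaller than GUARD:

    #include <string.h>

    #define GUARD (2u * 1024 * 1024)  /* e.g. 2MB of PROT_NONE guard pages */

    static uint32_t load_two_fields(const uint8_t *mem, uint64_t mem_len,
                                    uint64_t node) {
        uint32_t a, b;
        if (node >= mem_len)          /* one explicit check... */
            trap_out_of_bounds();
        /* ...covers both loads below: any overrun past mem_len lands in
           the guard pages and faults, which the handler turns into a trap. */
        memcpy(&a, mem + node, sizeof a);      /* offset 0 */
        memcpy(&b, mem + node + 8, sizeof b);  /* offset 8 */
        return a + b;
    }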
Still, there are common patterns, like pointer-chasing in linked list traversal, where you keep getting an unknown i64 pointer that you just have to bounds check...
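Something like this, where each hop yields a fresh untrusted u64 and the check can't be hoisted out of the loop (sketch; the node layout - value at +0, next at +8, 0 terminating the list - is made up):

    static uint64_t sum_list(const uint8_t *mem, uint64_t mem_len,
                             uint64_t node) {
        uint64_t sum = 0;
        while (node != 0) {
            uint64_t value, next;
            /* overflow-safe "node + 16 <= mem_len"; needed on every hop */
            if (mem_len < 16 || node > mem_len - 16)
                trap_out_of_bounds();
            memcpy(&value, mem + node, sizeof value);     /* value at +0 */
            memcpy(&next,  mem + node + 8, sizeof next);  /* next  at +8 */
            sum += value;
            node = next;
        }
        return sum;
    }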
> Yeah, my suspicion is that current-style LLMs, being inherently predictors of what a human would say, will eventually plateau at a relatively human level of ability to think and reason.
I don't think things can end there. Machines can be scaled in ways human intelligence can't: if you have a machine of vaguely human-level intelligence and you buy a 10x faster GPU, you suddenly have something of vaguely human intelligence that runs 10x faster.
Speed by itself is going to give it superhuman capabilities, but it isn't just speed. If you can run your system 10 times rather than once, you can have each run consider a different approach to the task and then select the best - at least for verifiable tasks.