Hacker News | azakai's comments

Some data on how badly he torched his consumer base: a Yale study says Tesla lost 1.26 million US sales due to Musk's politics.

https://www.usatoday.com/story/cars/news/2025/10/28/tesla-lo...


0% slower means "the same speed." The same number of seconds.

10% slower means "takes 10% longer." 10% more seconds.

So 45% slower than 2 seconds is 1.45 * 2 = 2.9 seconds.
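
Or, as a throwaway check in code (just my own sketch of the arithmetic):

    fn slower(seconds: f64, percent_slower: f64) -> f64 {
        // "X% slower" multiplies the time by (1 + X/100).
        seconds * (1.0 + percent_slower / 100.0)
    }

    fn main() {
        println!("{}", slower(2.0, 0.0));  // 2   (0% slower: same time)
        println!("{}", slower(2.0, 10.0)); // 2.2 (10% more seconds)
        println!("{}", slower(2.0, 45.0)); // 2.9 (45% slower than 2s)
    }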


The data here is interesting, but bear in mind it is from 2019, and a lot has improved since then.

It might just be that we evolved it first. Someone has to (if anyone does).


The Cranelift website does have that quote, but the linked paper says

> The resulting code performs on average 14% better than LLVM -O0, 22% slower than LLVM -O1, 25% slower than LLVM -O2, and 24% slower than LLVM -O3

So it is more like 24% slower, not 14%. Perhaps a typo (24 vs. 14), or the direction got mixed up (it is +14% against -O0 but -24% against -O3), or I'm reading that wrong?

Regardless, those numbers are on a particular set of database benchmarks (TPC-H), and I wouldn't read too much into them.


Even 14% would be unacceptably slow for a system language.

I don’t think that means it’s not doable, though.


Execution speed of the generated code doesn't seem to be Cranelift's priority; they're focusing instead on compilation speed and security. Those are still useful in Rust for debug builds at least, where we want a quick turnaround time and as much verification as possible.
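
If I recall correctly, nightly Cargo can already opt the dev profile into the Cranelift backend with something like the following, but the exact keys may have changed, so treat this as a sketch and check the rustc_codegen_cranelift docs:

    # Cargo.toml (nightly only)
    cargo-features = ["codegen-backend"]

    [profile.dev]
    codegen-backend = "cranelift"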


Interesting. Thanks for those numbers. I'd be interested in trying some real-world applications myself.


Sorry about binaryen.js - those JS/TS bindings could be a lot better, and better documented, but priorities are generally focused on improving optimizations in core Binaryen.

That is, most work in Binaryen goes into improving wasm-opt, which reads in wasm and writes out wasm, so any toolchain can use it (as opposed to just JS/TS ones).
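
A typical invocation is just a wasm-to-wasm transform, along the lines of:

    wasm-opt -O3 input.wasm -o output.wasm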

But if someone had the time to improve the JS/TS bindings that would be great!


> garbage collection was implemented by my colleague Nick Fitzgerald a few years ago

The wasm features page says it is still behind a flag on wasmtime (--wasm=gc). Is that page out of date?


No, it's still behind a flag (and so transitively, exceptions are too, because we built exception objects on top of GC).

Our docs (https://docs.wasmtime.dev/stability-tiers.html) put GC at tier 2 with the reason "production quality", and I believe the remaining concern there is that we eventually want to do a semi-space copying implementation rather than the current DRC (deferred reference counting). Nick could say more. But we're spec-compliant as-is, and the question was whether we've implemented these features -- which we have :-)


Great, thanks for the info!


Wasm GC is entirely separate from Wasm Memory objects, so no, this does not help linear memory applications.


The special part is the "signal handler trick", which is easy to use with 32-bit pointers. You reserve 4GB of virtual address space - everything that 32 bits can address - and mark everything above the used portion of memory as trapping. Then you can just do normal reads and writes, and the CPU hardware checks out-of-bounds accesses for you.

With 64-bit pointers, you can't really reserve all the possible space a pointer might refer to. So you end up doing manual bounds checks.
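
A rough sketch of the reservation side of that trick on a POSIX system, just to illustrate the idea (this is not what any particular engine actually does; it assumes the libc crate and a 64-bit host):

    use libc::{mmap, mprotect, MAP_ANONYMOUS, MAP_FAILED, MAP_PRIVATE,
               PROT_NONE, PROT_READ, PROT_WRITE};
    use std::ptr;

    const WASM_PAGE: usize = 64 * 1024;
    const FOUR_GIB: usize = 1 << 32; // everything a 32-bit pointer can address

    fn main() {
        unsafe {
            // Reserve 4GiB of address space, all inaccessible. Nothing is
            // committed yet; any access in this range faults, and the
            // engine's signal handler turns that fault into a wasm trap.
            let base = mmap(
                ptr::null_mut(),
                FOUR_GIB,
                PROT_NONE,
                MAP_PRIVATE | MAP_ANONYMOUS,
                -1,
                0,
            );
            assert!(base != MAP_FAILED);

            // "Grow" linear memory to 16 wasm pages by making just that
            // prefix readable and writable. Accesses below the limit are
            // plain loads/stores; anything above still faults in hardware,
            // so no per-access bounds check is needed.
            let pages = 16;
            assert_eq!(
                mprotect(base, pages * WASM_PAGE, PROT_READ | PROT_WRITE),
                0
            );

            let mem = base as *mut u8;
            mem.add(123).write(42); // in bounds: just a store
            // mem.add(pages * WASM_PAGE).write(42); // out of bounds: traps
        }
    }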


Hi Alon! It's been a while.

Can't bounds checks be avoided in the vast majority of cases?

See my reply to nagisa above (https://news.ycombinator.com/item?id=45283102). It feels like by using trailing unmapped barrier/guard regions, one should be able to elide almost all bounds checks that occur in the program with a bit of compiler cleverness, and convert them into trap handlers instead.


Hi!

Yeah, certainly compiler smarts can remove many bounds checks (in particular for small deltas, as you mention), hoist them, and so forth. Maybe even most of them in theory?

Still, there are common patterns like pointer-chasing in a linked list traversal, where you keep getting a fresh, unknown i64 pointer that you simply have to bounds check...
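
Concretely, each such access ends up compiling to something like this (a hand-wavy sketch of a memory64-style load, not real engine output):

    // `mem` is the linear memory, `addr` an untrusted 64-bit pointer.
    fn load_u32(mem: &[u8], addr: u64) -> Result<u32, &'static str> {
        let end = addr.checked_add(4).ok_or("out of bounds")?; // wasm trap
        if end > mem.len() as u64 {
            return Err("out of bounds"); // wasm trap
        }
        let a = addr as usize;
        Ok(u32::from_le_bytes(mem[a..a + 4].try_into().unwrap()))
    }

For accesses at small constant offsets from the same base, the compiler can merge those comparisons (or lean on a trailing guard region, as you say), but each fresh next-pointer loaded out of memory needs its own check.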


> Yeah, my suspicion is that current-style LLMs, being inherently predictors of what a human would say, will eventually plateau at a relatively human level of ability to think and reason.

I don't think things can end there. Machines can be scaled in ways human intelligence can't: if you have a machine of vaguely human-level intelligence and you buy a 10x faster GPU, you suddenly have something of vaguely human-level intelligence that runs 10x faster.

Speed by itself is going to give it superhuman capabilities, but it isn't just speed. If you can run your system 10 times rather than once, you can have each run consider a different approach to the task and then select the best result, at least for verifiable tasks.
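
That second part is basically best-of-N sampling against a verifier. A toy sketch of the selection loop (generate and score here are made-up placeholders, not any real API):

    // Run n independent attempts and keep the best one a verifier accepts.
    fn best_of_n<T, S: PartialOrd>(
        n: usize,
        generate: impl Fn(usize) -> T,    // one attempt, e.g. one approach per index
        score: impl Fn(&T) -> Option<S>,  // None = failed verification
    ) -> Option<T> {
        let mut best: Option<(S, T)> = None;
        for i in 0..n {
            let candidate = generate(i);
            if let Some(s) = score(&candidate) {
                if best.as_ref().map_or(true, |(b, _)| s > *b) {
                    best = Some((s, candidate));
                }
            }
        }
        best.map(|(_, c)| c)
    }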


Good point

