
This is driven by programmers' insatiable thirst for performance. Compiler writers are constantly judged on benchmarks, and the only way to squeeze that last flop out of a piece of code is to take the specification to its extreme.

UB is always about optimisations and performance. Incidentally, this is why I don't think talking about "nasal demons" is productive. The compiler mostly just uses UB to assume: ah, this can't happen, so I can optimise it away. Often that means valid programs go faster. We wanted it: we got it.

From my limited experience (and it's been a while), -O0 (no optimisations) is really quite reliable, even if you do all kinds of UB shenanigans.
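
For the curious, a minimal sketch of what "assume this can't happen" looks like in practice (the classic signed-overflow case, not anything from the article; exact behaviour varies by compiler and flags):

    #include <limits.h>
    #include <stdio.h>

    /* Signed overflow is UB, so at -O2 GCC/Clang may assume x + 1
       never wraps and fold this test to 0 (always false). At -O0
       the naive wrap-around check usually "works". */
    int will_overflow(int x) {
        return x + 1 < x;   /* UB when x == INT_MAX */
    }

    int main(void) {
        printf("%d\n", will_overflow(INT_MAX)); /* often 1 at -O0, 0 at -O2 */
        return 0;
    }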



> This is driven by programmers' insatiable thirst for performance. Compiler writers are constantly judged on benchmarks, and the only way to squeeze that last flop out of a piece of code is to take the specification to its extreme.

Really? I've seen people switch between competing compilers for licensing reasons, platform support, or features, but benchmark performance? At most, blog posts suggesting that a new compiler wasn't ready.

Compiler writers judge themselves on benchmarks.


Competitive performance for your workload is basically the reason you would buy Intel's compiler, right?


It might not be true now, because LLVM and GCC can generally put a commercial compiler six feet under, but if you're paying for a compiler you'd definitely want to choose the one that delivers the best performance (money being no object).

No idea whether ICC is still worth paying for


> No idea whether ICC is still worth paying for

From my experience, ICC is far more reluctant to exploit UB, yet still generates very good code.


ICC goes beyond exploiting UB: it bends the standard itself and generates code that is technically incorrect.


What do you mean?


As Patrick mentions, ICC generates code that doesn't follow IEEE-754: https://news.ycombinator.com/item?id=20437375 (I should have mentioned I was talking about that rather than the C standard).


Oh I see, you're talking about floating point.

So basically ICC has -ffast-math (or -funsafe-math-optimizations) on by default, and you can turn it off with an explicit flag?

I see this as more of a philosophical difference than a material one since you can just add or remove the flag on either one...
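
For anyone following along, a tiny sketch of why the fast-math family is unsafe: it licenses reassociation, and floating-point addition isn't associative (plain C; nothing here is from the thread):

    #include <stdio.h>

    int main(void) {
        double a = 1e16, b = -1e16, c = 1.0;
        /* Under strict IEEE-754, evaluation order matters: */
        printf("%.1f\n", (a + b) + c);  /* 1.0 */
        printf("%.1f\n", a + (b + c)); /* 0.0: c is absorbed into b before b cancels a */
        /* Under -ffast-math the compiler may rewrite one form into
           the other, silently changing the result. */
        return 0;
    }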


IME, benchmarks aren't enough of an impetus on their own to move between compilers, but they're often a significant part of what's considered once a move is otherwise motivated.


For C++, compilation speed is the benchmark that matters. At least that's why I started using clang.


> This is driven by programmers' insatiable thirst for performance. Compiler writers are constantly judged on benchmarks, and the only way to squeeze that last flop out of a piece of code is to take the specification to its extreme.

Ironically, the strict aliasing rule (which is one of the most common causes of UB) makes writing fast programs much harder, because it forbids type punning (except via memcpy or unions).

BTW, according to WG14 mailings and minutes, the C committee is considering either relaxing it or creating a standard way to suppress it in C2X. I can't wait for it.
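
For reference, the classic case, as a sketch (the identifiers are mine, not from any particular codebase):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Reading a float's bit pattern. */
    uint32_t bits_ub(float f) {
        return *(uint32_t *)&f;   /* UB: uint32_t lvalue aliasing a float */
    }

    uint32_t bits_ok(float f) {
        uint32_t u;
        memcpy(&u, &f, sizeof u); /* well-defined type punning */
        return u;
    }

    int main(void) {
        printf("0x%08x\n", bits_ok(1.0f)); /* 0x3f800000 */
        return 0;
    }

Mainstream compilers recognise the memcpy idiom and compile bits_ok down to a single register move, which is the point the reply below makes.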


Writing fast programs while keeping the strict aliasing rule in mind isn't all that hard: compilers know the semantics of memcpy and can optimize your use down to what it's "supposed to be" in assembly.



