
You are wrong. The formalized concept of UB was introduced exactly because of this.

Let's take something as simple as division by zero. Now suppose you have a bunch of code with arbitrary arithmetic operations.

A compiler cannot optimize this code at all without somehow proving that all denominators are nonzero. What UB gives you is that the compiler can optimize the program on the assumption that UB never occurs. If it actually does occur, who cares: the program would have done something bogus anyway.

Now apply the same reasoning to pointer dereferences, and so on.
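
As a rough sketch of that reasoning (not from the original comment; the function names are hypothetical and the exact transformations depend on the compiler and optimization level):

    #include <stdio.h>

    /* Because a / b is undefined when b == 0, an optimizer may assume
     * b != 0 here and treat the later check as dead code. */
    int scaled(int a, int b) {
        int q = a / b;      /* UB if b == 0 */
        if (b == 0)         /* may be removed under the "UB never happens" assumption */
            return 0;
        return q;
    }

    /* Same idea with pointers: dereferencing p lets the optimizer assume
     * p is non-null, so the NULL check that follows can be dropped. */
    int first_or_default(int *p) {
        int v = *p;         /* UB if p == NULL */
        if (p == NULL)      /* may be optimized away */
            return -1;
        return v;
    }

    int main(void) {
        int x = 7;
        printf("%d %d\n", scaled(10, 2), first_or_default(&x));
        return 0;
    }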



UB was not introduced to facilitate optimization, period. At the time the ANSI standard was being written, such optimizations didn't even exist yet. The edge-case trickery around "assume behavior is always defined" didn't start showing up until the late '90s, a full decade and a half later.

UB was introduced to accommodate variant/incompatible platform behavior (in your example, how the hardware treats a divide-by-zero condition) in a way that let pre-existing code remain valid on the platform it was written for, while leaving the core language semantics clear for future code.
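
For what it's worth, here's a minimal sketch of that platform variance, under the assumption of a typical modern toolchain: on x86 the divide instruction faults (delivered as SIGFPE on Linux), while an AArch64 integer divide by zero typically just yields 0, and other hardware may do something else again.

    #include <stdio.h>

    int main(void) {
        /* volatile keeps the compiler from folding the division at compile time */
        volatile int num = 1, den = 0;

        /* What happens next is hardware-dependent: x86 raises a divide fault
         * (SIGFPE on Linux), AArch64's SDIV returns 0, and other machines may
         * do something else entirely -- which is why the C standard leaves
         * integer division by zero undefined. */
        int q = num / den;
        printf("q = %d\n", q);
        return 0;
    }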



