
Unless you are paid by the line that should either work as "expected" (i.e. wrap) or produce an error about a meaningless comparison.

Is it worth a few developer days worth of work to track down a hard to repro bug that only happens with hard to debug optimizations enabled?



No, which is one reason why I almost always use unsigned types in my C code, particularly in the context of data structure management, where negative values are unnecessary and usually nonsensical.

GCC supports -fwrapv and -fno-strict-overflow; and I think clang supports both, too. I've never cared to use them because I only rarely use signed types. But some projects and programmers use those options habitually.

AFAIU, Rust panics by default on signed overflow. And even if it wraps, that's not unequivocally better. Unlike enforced buffer boundary checks, neither behavior is clearly better than what C does. Arithmetic overflow is a common and serious issue in just about every language. Short of a compile-time constraint or diagnostic that fires when the compiler cannot prove an overflow is either explicitly checked or benign (that is, a negative number is no worse than a large positive number in the context where the value is used), there's no obvious solution that forecloses most exploit opportunities across the board.

Because so much code, regardless of language, has some unchecked signed integer overflow bug, if you panic you make it easy to DoS an application. And a DoS can sometimes turn into an exploit when you're dealing with cooperating processes. For example, you occasionally see bugs where an authentication routine fails open instead of failing closed when the authenticator is unreachable.

If you silently wrap signed overflow, all of a sudden the value is in a set (negative numbers) that might be completely unexpected. Even in so-called memory safe languages, negative indices can leak sensitive information or erroneously select privileged state. For example, in some languages -1 selects the last element of an array. You can check for negative values explicitly, but multiplicative overflow can wrap around to a positive number, which is no better than using an unsigned type; a check for negative values is typically redundant work that adds unnecessary complexity--and unnecessary opportunity for mistakes--relative to sticking to unsigned types.

IMO, silently wrapping signed overflow is the worst option. I just don't see the point. The only three options I like for avoiding arithmetic overflow bugs, depending on language and context, are

1) Check for overflow explicitly (independently from array boundary constraints) and bubble up an error;

2) Carefully rely on unsigned modulo arithmetic;

3) Carefully rely on saturation arithmetic.

IMO the C standard's fault isn't its refusal to make signed overflow defined or implementation-defined, but that it provides neither a standard API for overflow detection, nor a construct for saturating semantics on integer types, nor a compilation mode that warns about unchecked signed overflow (e.g. something at least as useful as -Wstrict-overflow in GCC).

Fortunately both GCC and clang have agreed on a standard API for overflow detection. That's something. But unfortunately it'll be years before you can consistently rely on those APIs without worrying about backward compatibility.


> AFAIU, Rust panics by default on signed overflow

Overflow of any integer type is considered a "program error", not undefined behavior. In debug builds, this is required to panic. In builds where it doesn't panic, it's well-defined as two's complement wrapping.

You can also request wrapping, saturating, checked, or overflowing behavior explicitly (e.g. wrapping_add, checked_add, saturating_add).



