
There is no reason that understandable and safe code will hit just 5% of a theoretical max. It may be closer to 95%.


No, but often it is far worse than 95%. A good example is random.randint() vs math.ceil(random.random() * N) in Python. The former is approximately 5x slower than the latter, yet for large enough values of N they produce effectively the same result. This isn't apparent from using them or from reading the docs, and it only really matters in hot loops.
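A minimal sketch of how you might measure this with timeit; the exact ratio depends on your Python version and hardware, so the 5x figure is the parent's claim, not a guarantee:

```python
import math
import random
import timeit

N = 1_000_000
RUNS = 100_000

# random.randint(1, N): inclusive range [1, N], with argument validation
# and integer-range machinery on every call.
t_randint = timeit.timeit(lambda: random.randint(1, N), number=RUNS)

# math.ceil(random.random() * N): one float in [0.0, 1.0) scaled up.
# Note it can return 0 in the (astronomically rare) case random() == 0.0.
t_ceil = timeit.timeit(lambda: math.ceil(random.random() * N), number=RUNS)

print(f"randint: {t_randint:.3f}s  ceil: {t_ceil:.3f}s  "
      f"ratio: {t_randint / t_ceil:.1f}x")
```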

Another favorite of mine is bitshifting / bitwise operators. Clear and obvious? Depends on your background. Fast as hell? Yes, always. It isn’t always needed, but when it is, it will blow anything else out of the water.
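Two classic bitwise idioms, as a sketch of the "clear and obvious, depending on your background" point; whether these read as obvious or as obfuscation is exactly the context-dependence discussed below:

```python
def is_power_of_two(n: int) -> bool:
    # A power of two has exactly one bit set, so clearing its lowest
    # set bit via n & (n - 1) leaves zero.
    return n > 0 and (n & (n - 1)) == 0

# Splitting a 16-bit value into bytes with a shift and a mask.
value = 0xABCD
low = value & 0xFF          # 0xCD
high = (value >> 8) & 0xFF  # 0xAB
```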


The compiler will almost always do the bit shifting for you, and better. Just switch Python for Rust and you'll get both performance and safe code.


Those Python snippets are both clean options :)

Bitwise is highly context dependent. There are simple usages, like shifts to divide or multiply by 2. There are idiomatic patterns that are clean when wrapped in good, reusable, restricted macros, such as common register manipulation on microcontrollers. And there are other uses that range from involuntary obfuscation to competition-grade obfuscation.
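The "clean when wrapped" pattern can be sketched in Python with small helpers; the register layout and names here are hypothetical, purely for illustration:

```python
# Hypothetical status-register layout (illustration only):
# bit 0: READY, bit 1: ERROR, bits 4-7: CHANNEL
READY_BIT = 0
ERROR_BIT = 1
CHANNEL_SHIFT = 4
CHANNEL_MASK = 0xF

def set_bit(reg: int, bit: int) -> int:
    return reg | (1 << bit)

def clear_bit(reg: int, bit: int) -> int:
    return reg & ~(1 << bit)

def get_field(reg: int, shift: int, mask: int) -> int:
    return (reg >> shift) & mask

def set_field(reg: int, shift: int, mask: int, value: int) -> int:
    # Clear the field, then OR in the new (masked) value.
    return (reg & ~(mask << shift)) | ((value & mask) << shift)

reg = 0
reg = set_bit(reg, READY_BIT)
reg = set_field(reg, CHANNEL_SHIFT, CHANNEL_MASK, 5)
```

Callers never touch raw shifts and masks directly, which is what keeps this kind of bit twiddling readable.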


> There are simple usages like shifts to divide/multiply by 2.

Clean code should not do that, as the compiler will do it for you.

Clean code should just say what it wants to do, not replace that with low-level performance optimizations. (Also, wasn't performance supposed to come from newer hardware?)


Fair point about shifting being superfluous and not clean!

I never said performance should come only from newer hardware. Only that it is possible to trade it against hardware costs, unlike correctness and trust.



