
Up until 4 years ago, I used to always allocate addresses specifically on bit boundaries. I did this mainly for one reason - aligning the bytes and trying not to waste CPU cycles.

For example, in a /24 network:

0-15 - network equipment
16-31 - some special gear
32-63 - mail servers or something like that
64-127 - split it up and align more things
128-191 - ... you get the point.

... The ultimate goal was to hit everything on a bit boundary so the CPU wouldn't work so hard
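A minimal sketch of what "bit boundary" means here, assuming Python's standard `ipaddress` module and the RFC 5737 example network 192.0.2.0/24 (the commenter names neither): each range above starts on a power-of-two offset and spans a power-of-two size, so it collapses to a single CIDR prefix rather than a ragged list of subnets.

```python
# Sketch: power-of-two-aligned ranges in a /24 each summarize to ONE subnet,
# so matching them is a single mask-and-compare. (Example network is mine.)
import ipaddress

base = ipaddress.ip_network("192.0.2.0/24")

# (start offset, size) pairs from the comment's allocation scheme
ranges = [(0, 16), (16, 16), (32, 32), (64, 64), (128, 64)]

for start, size in ranges:
    first = base.network_address + start
    last = base.network_address + start + size - 1
    # summarize_address_range yields exactly one network iff the
    # range is aligned to a bit boundary
    subnets = list(ipaddress.summarize_address_range(first, last))
    print(f"{first}-{last} -> {[str(s) for s in subnets]}")
```

For instance, the 32-63 block comes out as the single subnet 192.0.2.32/27, whereas a misaligned range like 33-64 would summarize to several prefixes.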

These days, CPU power is so plentiful that it doesn't matter. DHCP is probably the way to go.

IPv6 is here. We have the opportunity to really make CPUs fast again with IPv6!



What's a "bit boundary" in this context? And how does your subnetting scheme prevent wasted CPU cycles, at all?



