640KB should be enough for anyone!


Or maybe we are repeating the experiment of the 1990s/2000s and can't wait to see the results confirmed?


The base unit is actually a single bit, so why aren't we talking about decabits instead of bytes, following the SI argument?


Networks are often measured in bits.

It's more about tradition. And sometimes a sprinkle of marketing lies (it's cheaper to make a 1024 GB SSD than a 1 TiB SSD).

    Disk /dev/nvme0n1: 953.87 GiB, 1024209543168 bytes, 2000409264 sectors
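
A quick sanity check in Python (hypothetical; just plugging in the byte count from the fdisk line above):

    nbytes = 1024209543168     # byte count from the fdisk output
    print(nbytes / 10**9)      # 1024.209543168 -- decimal GB, the marketed size
    print(nbytes / 2**30)      # ~953.87 -- binary GiB, what fdisk reports

Same drive, both figures are "correct"; they just use different prefixes.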


Because deca isn't a good SI prefix, and only gets grandfathered in. Also because some people like weird derived units like moles. Fwiw, in many places ISPs advertise speeds in megabits per second, no doubt to sound eight times faster than they are.


> advertise speeds in megabits per second

"bits per second" is what it always has been for computer communication.

When I got started, I had a 300 bits/second modem, which later got upgraded to 1200/75 bits/s and then 2400 bits/second.

Later on, we had 57.6 kbit/s modems, 64 kbit/s ISDN lines, 2 Mbit/s ADSL, etc.

All the speedtest websites I've seen also use Mbit/s.

But sure, if I'm downloading the latest Ubuntu distro, I want to know the current speed in megabytes/s.
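
The conversion is just a factor of eight. A minimal Python sketch (the 100 Mbit/s line speed and the ~6 GB image size are made-up illustration values):

    line_mbit_s = 100                    # advertised speed, megabits per second
    line_mbyte_s = line_mbit_s / 8       # 12.5 megabytes per second delivered
    iso_bytes = 6 * 10**9                # a roughly 6 GB installer image
    print(iso_bytes / (line_mbyte_s * 10**6) / 60)   # ~8 minutes to download

So the "100" the ISP advertises shows up as "12.5" in the download dialog, which is where most of the confusion starts.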


I refuse to say mebibyte or whatever alternative unit. 1024 bytes is one kilobyte, and 1000 kilobytes is not a useful unit (and so on). As for it being a conspiracy by hard drive manufacturers, Western Digital did settle the case rather than win it: https://arstechnica.com/uncategorized/2006/06/7174-2/


> specifically their 80GB WD800VE [...] that had only 79,971,254,272 bytes (74.4GB)

That's even less than the decimal size, so that's absolutely false advertising. I'm not surprised they settled.
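
Easy to verify with the byte count quoted from the article (a quick Python check; nothing here beyond the quoted numbers):

    nbytes = 79_971_254_272
    print(nbytes / 10**9)     # ~79.97 -- short of even 80 decimal GB
    print(nbytes / 2**30)     # ~74.48 -- the binary GiB figure behind the quoted "74.4GB"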


Only with additional hardening between the container and the kernel and hardware itself.


There would be an ironic twist if Nvidia (who first competed with AMD, née ATI) were forced to license their IP to Intel for national security reasons, but it would rhyme with history (the x86 licenses to AMD) and benefit the consumer.


Why couldn't Intel fab GPUs for Nvidia? Taiwan just had two magnitude 7+ earthquakes this morning, so we can add that to the geopolitical risk of having a large number of the world's most advanced chip fabs on the island of Taiwan.


Other than process specifics, they could, and that's exactly what happened in the 1980s when the US government got worried about relying on a single CPU supplier. The result was (to some extent) modern cheap computing.


Disgusting abuse of the democratic process to halt scientific and technological progress in the name of making one sketchy man rich.


I agree. It's a shame the PEOPLE do not have any say in this matter. They should be able to vote also, and one of the options should be "outright ban on AI".


I agree if we extend it to mathematics and the use of simple machines.


It's somewhat regional, and it means to hunt down the target at the expense of everything else, as a dedicated hunting dog might.


Is there a proper reverse engineering of the payload yet?


Isn't u-root also basically this?


Yes, it looks similar to u-root (https://github.com/u-root/u-root), which is used as part of host firmware. There's a description of u-root in chapter 6 of https://link.springer.com/book/10.1007/978-1-4842-7939-7, too.

