It's about the assumptions made during the standardization.
Compared to 30 years ago, we now have better knowledge and statistics about what low-level primitives are useful in a codec.
E.g. JPEG operates on fixed 8×8 blocks independently, which makes it less efficient for very large images than a codec with variable block sizes. But variable block sizes add overhead for very small images.
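A toy back-of-the-envelope model can make that tradeoff concrete. Every number below (per-block header cost, split-flag cost, block sizes) is an illustrative assumption rather than a figure from any real codec; it only shows which way the overheads point:

```python
# Toy model: all constants are made-up assumptions, not real codec figures.

HEADER_BITS = 16   # assumed per-block cost: prediction mode, DC term, etc.
SPLIT_FLAG = 1     # assumed cost of one quadtree split/no-split decision

def fixed_8x8_bits(w, h):
    """Overhead when everything is coded as independent 8x8 blocks, JPEG-style."""
    return (w // 8) * (h // 8) * HEADER_BITS

def variable_smooth_bits(w, h, big=64):
    """Smooth content: one 64x64 block (plus a 'no split' flag) replaces 64 headers."""
    return (w // big) * (h // big) * (HEADER_BITS + SPLIT_FLAG)

def variable_detailed_bits(w, h, big=64, small=8):
    """Detailed content: every 64x64 region gets split all the way down to 8x8,
    so we pay the same 8x8 headers *plus* all of the split flags."""
    flags_per_region, size, nodes = 0, big, 1
    while size > small:
        flags_per_region += nodes   # one split decision per node at this level
        nodes *= 4
        size //= 2
    regions = (w // big) * (h // big)
    return fixed_8x8_bits(w, h) + regions * flags_per_region * SPLIT_FLAG

for w, h in [(64, 64), (8192, 8192)]:
    print(f"{w}x{h}: fixed 8x8 = {fixed_8x8_bits(w, h):,} bits, "
          f"variable/smooth = {variable_smooth_bits(w, h):,} bits, "
          f"variable/detailed = {variable_detailed_bits(w, h):,} bits")
```

The intuition is that a big photo usually has large smooth regions where the 64×64 blocks pay off enormously, while a tiny icon is "detailed" at the scale of the block grid, so the partition signaling is pure tax on top of an already small file.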
Another reason is common hardware. As hardware evolves, different hardware-accelerated encoding/decoding techniques become feasible, and those get folded into the new standards.
Something I learned about 10 years ago, when bandwidth was still expensive, is that you can make a very large version of an image, set the JPEG compression to a ridiculously high value, then scale the image down in the browser. The artifacts aren't as noticeable once the image is scaled down, and the file size is actually smaller than it would be if you encoded a smaller image with less compression.
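A minimal way to test that claim on a given image, assuming Pillow is installed; "photo.jpg", the display size, and the quality values are all placeholders:

```python
# Rough sketch using Pillow (pip install Pillow).
from io import BytesIO
from PIL import Image

def jpeg_size(img, quality):
    """Encode to an in-memory JPEG and return the byte count."""
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell()

src = Image.open("photo.jpg").convert("RGB")   # hypothetical source image
display = (800, 600)                           # size the image is shown at

# Option A: encode at the display size with a "normal" quality setting.
small = src.resize(display, Image.LANCZOS)

# Option B: encode at 2x the display size with aggressive compression,
# then let the browser scale it down (width/height attributes or CSS).
large = src.resize((display[0] * 2, display[1] * 2), Image.LANCZOS)

print("display-sized, quality 80:", jpeg_size(small, 80), "bytes")
print("2x-sized,      quality 25:", jpeg_size(large, 25), "bytes")
```

Whether the oversized low-quality version actually comes out smaller depends on the image; the browser-side part of the trick is simply serving the 2x file with smaller width/height attributes (or CSS) so it gets scaled down on display.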
This “huge JPEG at low quality” technique has been widely known for years. But it is typically avoided by larger sites and CDNs, as it requires a lot more memory and processing on the client.
Depending on the client or the number of images on the site, the huge JPEG could be a crippling performance issue, or even a “site doesn’t work at all” issue.
Interesting. I've never heard of this, but it makes some sense: the point of lossy image compression is to provide a better quality/size ratio than downscaling.