
Will streaming services ever stop over-compressing their content?

I have a top-of-the-line 4K TV and gigabit internet, yet the compression artifacts make everything look like putty.

Honestly, the best picture quality I’ve ever seen was over 20 years ago using simple digital rabbit ears.

You especially notice the compression on gradients and in dark movie scenes.

And yes — my TV is fully calibrated, and I’m paying for the highest-bandwidth streaming tier.

Not my tv, but a visual example: https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....



Content delivery costs a lot for streaming services. After content is produced, this is basically the only remaining cost. It's not surprising that they would go to extreme measures to reduce bitrate.

That’s why, presumably, Netflix came up with the algorithm for removing camera grain and adding synthetically generated noise on the client[0], and why YouTube shorts were recently in the news for using extreme denoising[1]. Noise is random and therefore difficult to compress while preserving its pleasing appearance, so they really like the idea of serving everything denoised as much as possible. (The catch, of course, is that removing noise from live camera footage generally implies compromising the very fine details captured by the camera as a side effect.)

[0] https://news.ycombinator.com/item?id=44456779

[1] https://news.ycombinator.com/item?id=45022184
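To make the idea concrete, here is a rough numpy sketch of the concept (illustrative only, not Netflix's or AV1's actual algorithm; the single-parameter "grain model" is a stand-in): the server denoises before encoding and ships a tiny grain description, and the client re-synthesizes grain from it at playback.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def prepare_for_encode(frame, sigma=1.0):
        # Server side: denoise and estimate how much grain was removed.
        denoised = gaussian_filter(frame, sigma=sigma)      # stand-in for a real denoiser
        grain_strength = float(np.std(frame - denoised))    # toy "grain model": one parameter
        return denoised, grain_strength                     # the denoised frame is what gets encoded

    def render_on_client(decoded, grain_strength, seed=0):
        # Client side: re-synthesize grain from the transmitted parameter.
        rng = np.random.default_rng(seed)
        grain = rng.normal(0.0, grain_strength, size=decoded.shape)
        return np.clip(decoded + grain, 0.0, 1.0)

    frame = np.clip(np.random.default_rng(1).normal(0.5, 0.05, (270, 480)), 0, 1)
    denoised, strength = prepare_for_encode(frame)
    displayed = render_on_client(denoised, strength)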


So:

1. camera manufacturers and film crews both do their best to produce a noise-free image

2. in post-production, they add fake noise to the image so it looks more "cinematic"

3. to compress better, streaming services try to remove the noise

4. to hide the insane compression and make it look even slightly natural, the decoder/player adds the noise back

Anyone else finding this a bit...insane?


> camera manufacturers and film crews both do their best to produce a noise-free image

This is not correct: camera manufacturers and filmmakers engineer _aesthetically pleasing_ noise (randomized grains appear smoother to the human eye than clean uniform pixels). The rest is still as silly as it sounds.


Considering how much many camera brands boast about their super-low-noise sensors, I'd still say a very common goal is to have as little noise as possible and then let the director/DoP/colorist add grain to their liking. Even something like ARRI's in-camera switchable grain profiles requires a low-noise sensor to begin with.

But yes, there are definitely also many DPs that like their grain baked-in and camera companies that design cameras for that kind of use.


In any case, luma noise is not at all a massive issue, and it is a mistake to say that crews do their best to produce a noise-free image. They do their best to produce an image that they want to see, and some amount of luma noise is not a deal-breaker. There are routinely higher priorities that take precedence over using the lowest ISO possible. Those priorities can also be financial, e.g. if you don't have enough lights.

It is only an issue in content delivery.


> randomized grains appear smoother to the human eye than clean uniform pixels

Does this explain why I dislike 4K content on a 4K TV? Some series and movies look too realistic, which in turn gives me an amateur-film feeling (like somebody made a movie with a smartphone).


This sounds like what is called the soap opera effect:

https://en.wikipedia.org/wiki/Soap_opera_effect

Which is generally associated with excess denoising rather than with excess grain.


Are you sure? That seems to be about motion interpolation. Not even about smooth motion, but about it being interpolated. Here the concern seems to be about how the individual, still images look, not anything about the motion between them.


> some series and movies look too realistic, what in turn gives me a amateur film feeling

This comment that I replied to is almost a textbook description of the soap opera effect.

The interpolation adds more FPS, which is traditionally a marker of film vs TV production.


Avatar 2 was particularly egregious with the poor interpolation


i think i've seen this effect on tv shot on cameras that were lower fps than the tv outputs. looks fake and bad and interpolated because it is.


Yes, just stop doing step 2 the way they're doing it, and instead, if they _must_ have noise, set the parameters for step 4 directly.


> 1. camera manufacturers and film crews both do their best to produce a noise-free image 2. in post-production, they add fake noise to the image so it looks more "cinematic"

This is patently wrong. The rest builds up on this false premise.


1.1: Google some low-light performance reviews of cinema cameras - you'll see that ISO noise is decreasing with every generation and that some cameras (like from Sony) have that as a selling feature.

1.2.: Google "how to shoot a night scene" or something like that. You'll find most advice goes something along the lines of "don't crank the ISO up, add artificial lighting to brighten shadows instead". When given a choice, you'll also find cinematographers use cameras with particularly good low-light performance for dark scenes (that's why dark scenes in Planet Earth were shot on the Sony A7 - despite the "unprofessional" form factor, it had simply the best high-ISO performance at the time)

2: Google "film grain effect". You'll find a bunch of colorists explaining why film grain is different from ISO noise and why and how you should add it artificially to your films.


I am reasonably confident that I know this subject area better than one could learn from three Google searches. “ISO noise” is not even a real term; you are talking about digital sensor noise, which can further be luminance noise or colour noise. Opinions about luma noise being “unaesthetic” are highly subjective, depending a lot on stylistic choices and the specific sensor, and some would say luma noise on professional sensors has not been ugly for more than a decade. Generally, the main reason I don't think your comments should be taken seriously is that you are basically talking only about a subset of photography, digital photography, while failing to acknowledge that fact (works are still shot on film in 2025 and will be for years to come).


"ISO noise" is a thing I've heard on a few different film sets and it's a convenient shorthand for "the noise that becomes really apparent only when you crank up the gain". Now that I'm thinking about it, there's a good chance this is more of a thing where I'm from, since our native language doesn't have different words for grain and noise, so we have to diffentiate between them with a suffix, which we then incorrectly also use in English. I guess on a film set with mostly native English speakers, noise and grain is clear enough.

Next, no shit aesthetics are subjective, I never said this is the one objective truth. I said this is a thing that many people believe, as evidenced by the plethora of resources talking about the difference between noise and grain, why tasteful grain is better than a completely clean image and how to add it in post.

And finally, come on, it's obvious to everyone in this thread that I'm referring to digital, which is also not "just a subset"; it's by far the biggest subset.

So idk what your point is. Most things are shot digitally. Most camera companies try to reduce sensor noise. Most camera departments try to stick to the optimal ISO for their sensor, both for dynamic range and for noise reasons, adjusting exposure with other means. In my experience, most people don't like the look of sensor/gain/iso/whatever noise. Many cinematographers and directors like the look of film grain and so they often ask for it to be added in post.

Besides the many/most/some qualifiers possibly not matching how you perceive this (which is normal, we're all different, watch different content, work in different circles...), where exactly am I wrong?


I can assure you that “ISO noise” is not a real term. I would take the word of whoever uses it with a grain of salt, movie set or not. Words have meanings.

> it's obvious to everyone in this thread that I'm referring to digital

It was blindingly obvious that you meant digital. That's why I pointed this out. Without mentioning that it is only a concern with digital photography, your points become factually incorrect on more than one level; because the thread wasn't talking specifically about digital photography, some of your points about noise don't apply even if they were correct (which they aren't, by your own admission that photography is subjective). Producing a noise-free image is not the highest priority for film crews (for camera manufacturers it is, but that's because it means more flexibility in different conditions; it does not mean film crews will always prioritize whatever settings give them the lowest noise, as there are plenty of higher priorities), and in some cases they choose to produce an image with some noise despite the capability to avoid it.

Sorry, with your googling suggestions it just reads like a newbie’s take on subject matter.


[flagged]


No need to put words in my mouth. First, the points were made in context of photography as a whole, and in that context, without specifying that they only apply in digital, they are false. Second, even in digital they are false. That’s all.


This isn't strictly Netflix per se, it's part of the AV1 codec itself, e.g. https://github.com/BlueSwordM/SVT-AV1/blob/master/Docs/Appen...
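For the curious, this is exposed as encoder parameters. A hedged example through ffmpeg's libsvtav1 wrapper (assumes an ffmpeg build with libsvtav1 and -svtav1-params support; the values are just illustrative):

    import subprocess

    # Denoise the source and signal grain-synthesis metadata instead of spending
    # bits encoding the grain itself. film-grain is a strength value (0-50).
    subprocess.run([
        "ffmpeg", "-i", "input.mov",
        "-c:v", "libsvtav1", "-preset", "6", "-crf", "30",
        "-svtav1-params", "film-grain=8:film-grain-denoise=1",
        "output.mkv",
    ], check=True)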


Yes, I was not strictly correct, it is a feature of AV1, but Netflix played an active role in its development, in rolling out the first implementation, and in AV1 codec development overall.


Prior codecs all had film grain synthesis too (at least back to H264), but nobody used it. Partly because it was obviously artificial and partly because it was too expensive to render, since you had to copy each frame to apply effects to it.


AFAIK, FGS support is the exception instead of the rule.

h.264 only had FGS as an afterthought, introduced years after the spec was ratified. No wonder it wasn’t widely adopted.

VP9, h.265 and h.266 don’t have FGS.


Did they remove it from the specs? Wouldn't surprise me. But since it's just extra metadata and doesn't affect encoding you could still use the old spec.


I don’t think it was ever part of the spec. It would surprise me if it was, because once a spec has been finalized and voted on, changing it is complicated. Getting agreement the first time is already difficult enough.


It feels to me like there are two different things going on:

1. Video codecs like the denoise, compress, synthetic grain approach because their purpose is to get the perceptually-closest video to the original with a given number of bits. I think we should be happy to spend the bits on more perceptually useful information. Certainly I am happy with this.

2. Streaming services want to send as few bytes as they can get away with. So improvements like #1 tend to be spent on decreasing bytes while holding perceived quality constant rather than increasing perceived quality while holding bitrate constant.

I think one should focus on #2 and not be distracted by #1 which I think is largely orthogonal.


For #1 the problem with keeping grain in the compressed video is that it doesn't follow the motion of the scene so it makes it much more expensive to code future frames.
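A toy way to see this, with made-up numbers: even under perfect motion compensation, independent per-frame grain never cancels out, so every frame's residual carries the full grain energy.

    import numpy as np

    rng = np.random.default_rng(0)
    scene = rng.random((100, 110))            # a static scene the camera pans across

    frame_a = scene[:, 0:100]                 # frame N
    frame_b = scene[:, 5:105]                 # frame N+1, camera panned 5 px

    # Ideal motion compensation: predict the overlapping part of frame N+1 by
    # shifting frame N by 5 px. For clean content the residual is exactly zero.
    clean_residual = frame_b[:, :95] - frame_a[:, 5:]
    print(np.abs(clean_residual).mean())      # 0.0

    # Add independent grain to each frame and repeat: the residual now carries
    # roughly sqrt(2) x the grain amplitude, in every single frame.
    grainy_a = frame_a + rng.normal(0, 0.02, frame_a.shape)
    grainy_b = frame_b + rng.normal(0, 0.02, frame_b.shape)
    grainy_residual = grainy_b[:, :95] - grainy_a[:, 5:]
    print(np.abs(grainy_residual).mean())     # ~0.02, never compresses away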


I disagree, because 1) complete denoising is simply impossible while preserving fine detail and 2) noise is a serious artistic choice—just like anamorphic flare, lens FOV with any distortion artifacts, chromatic aberration, etc. Even if it is synthetic film grain that is added in post, that has been somebody’s artful decision; removing it and simulating noise on the client butchers the work.


>Content delivery costs a lot for streaming services.

The hard disk space to store an episode of a show is $0.01. With peering agreements, the bandwidth of sending the show to a user is free.


>With peering agreements bandwidth of sending the show to a user is free.

I'm not sure why you think this, but it's one of the oddest things I've seen today.

The more streams you can send from a single server the lower your costs are.


Sure, but buying a server is not buying bandwidth. The point of my post is to counter the narrative that streaming video is very expensive.


There might also be copyright owners' requirements, e.g. a contract that limits the quality of the material.


I recall Netflix saying that streaming cost was nothing compared to all other costs.


I am curious to see their breakdown. It seems very counter-intuitive of them to invest so much into reducing bitrate if the cost of delivery is negligible. R&D and codec design efforts cost money, and running more optimized codecs and aggressive denoising costs compute.


Improved compression also saved on storage costs. We would have to hear them state the same about storage.


It probably is, but the bean counters do not want to hear this; they want to cut everything to the point that it's just above the limit consumers will accept before they throw in the towel and cancel their membership (ads, low-quality compression, etc.).


> You especially notice the compression on gradients and in dark movie scenes.

That's not a correctly calibrated TV. The contrast is tuned WAY up. People do that to see what's going on in the dark, but you aren't meant to really be able to see those colors. That's why it's a big dark blob. It's supposed to be barely visible on a well calibrated display.

A lot of video codecs will erase details in dark scenes because those details aren't supposed to be visible. Now, I will say that streaming services are tuning that too aggressively. But I'll also say that a lot of people have miscalibrated displays. People simply like to be able to make out every detail in the dark. Those two things come into conflict with one another, causing the effect you see above.


> but you aren't meant to really be able to see those colors

Someone needs to tell filmmakers. They shoot dark scenes because they can - https://www.youtube.com/watch?v=Qehsk_-Bjq4 - and it ends up looking like shit after compression that assumes normal lighting levels.


> Someone needs to tell filmmakers. They shoot dark scenes because they can…

i disagree completely. i watch a movie for the filmmakers story, i don’t watch movies to marvel at compression algorithms.

it would be ridiculous to watch movies shot with only bright scenes because streaming service accountants won’t stop abusing compression to save some pennies.

> …ends up looking like shit after compression that assumes normal lighting levels.

it’s entirely normal to have dark scenes in movies. streaming services are failing if they’re using compression algorithms untuned to do dark scenes when soooo many movies and series are absolutely full of night shots.


I feel like there are probably cinematic film-making tricks you can use to imply a very dark scene without serving #111 pixels all over the screen.


As I said, I think the streaming services have too-aggressive settings there. But that doesn't change the fact that a lot of people have their contrast settings overtuned.

It should be noted, as well, that this generally isn't a "not enough bits" problem. There are literally codec settings to tune which decide when to start smearing the darkness. On a few codecs (such as AV1) those values are pretty badly set by default. I suspect streaming services aren't far off from those defaults. The codec settings instead prioritize putting bits into the lit parts of a scene rather than sparing a few for the darkness, as you might prefer.


Video codecs aren't tuned for any particular TV calibration. They probably should be because it is easier to spot single bit differences in dark scenes, because the relative error is so high.

The issue is just that we don't code video with nearly enough bits. It's actually less than 8-bit since it only uses 16-235.
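The arithmetic behind that, for reference:

    # Quick arithmetic on "limited range" 8-bit video: luma only uses codes 16-235,
    # so there are 220 usable steps instead of 256.
    full_range   = 2 ** 8             # 256 code values in true 8-bit
    limited_luma = 235 - 16 + 1       # 220 usable luma codes (16 = black, 235 = white)
    print(full_range, limited_luma)   # 256 220
    print(limited_luma / full_range)  # ~0.86: roughly 14% of the code space never carries picture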


If an eye is able to distinguish all 256 shades on a correctly calibrated display, then the content should be preserved.


>Will streaming services ever stop over-compressing their content?

Before COVID, Netflix was using at least 8 Mbps for 1080p content. With x264 / Beamr that is pretty good, and even better with HEVC. Then COVID hit, and every streaming service, not just Netflix, had an excuse to lower quality due to increased demand on limited bandwidth. Everything has gone downhill since then. Customers got used to the lower quality, and I don't believe they will ever bring it back up. Now it is only something like 3-5 Mbps, according to a previous test posted on HN.

And while it is easy for HEVC / AV1 / AV2 to achieve 50%+ real-world bitrate savings compared to H.264 in the 0.5-4 Mbps range, once you go past that the savings shrink rapidly, to the point where the good old x264 encoder may perform better at much higher bitrates.


Most 1080p WEB-DLs are in the 6-8 Mbps range still, based on a quick glance.


Nice. There was a previous post on HN showing that the 5 or 6 samples he had were in the 3-5 Mbps range.


Some lower bitrate shows are animated cartoons where far less bitrate is really needed. I’m sure you could find some awful compression botches if you really looked on a public tracker though.


Netflix also has a huge incentive not to use H.265 and H.264: licensing costs.


Seems like piracy is the way for you


Sadly, many shows aren't released on Blu-ray anymore, so even piracy won't deliver better quality.


Piracy enables you to do things like debanding on playback, or more advanced video filtering to remove other compression issues.
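For example, mpv can deband on the GPU at playback time; a sketch via its CLI (the values are just illustrative starting points):

    import subprocess

    # Playback-time debanding: smooth out compression banding and mask the
    # remaining steps with a touch of grain.
    subprocess.run([
        "mpv",
        "--deband=yes",
        "--deband-iterations=2",
        "--deband-grain=24",
        "movie.mkv",
    ])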


I believe many sites still prefer Amazon webrips because their content is encoded at a higher bitrate than Netflix.


darn pirates should run the content through a super resolution model!


Not all video streaming services choose to use the same extremely low average video bit rate used by Netflix on some of their 4k shows.

Kate - Netflix - 11.15 Mbps

Andor - Disney - 15.03 Mbps

Jack Ryan - Amazon - 15.02 Mbps

The Last of Us - Max - 19.96 Mbps

For All Mankind - Apple - 25.12 Mbps

https://hd-report.com/streaming-bitrates-of-popular-movies-s...


Netflix has shown they're the mattress-company equivalent of streaming services.

You will be made to feel the springs on the cheapest plan/mattress, and it's on purpose so you'll pay them more for something that costs them almost nothing.


Are you sure about the black-areas blocking? I remember a long time ago, when I was younger and had time for this kind of tomfoolery, I noticed this exact issue in my Blu-ray backups. I figured I needed to up the bitrate, so I started testing, upping the bitrate over and over. Finally, I played the Blu-ray and it was still there. This was an old-school, dual-layer, 100GB disc of one of the Harry Potter movies. Still saw the blocking in very dark gradients.


the downside of 8 bits per channel is you really don't have enough to get a smooth gradient over dark colors.
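A quick illustration of why: quantize a smooth dark ramp to 8-bit code values and count how few distinct values it lands on.

    import numpy as np

    # A smooth dark gradient spanning the bottom 5% of the signal range, across
    # a 1920-pixel-wide screen, quantized to 8-bit code values.
    ramp  = np.linspace(0.0, 0.05, 1920)
    codes = np.round(ramp * 255).astype(np.uint8)
    print(np.unique(codes).size)      # 14 -> the whole gradient collapses into 14 visible bands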


Yup. I think that’s exactly it.


I’m still so surprised Disney+ degrades their content/streaming service so much. Of all the main services I’ve tried (Netflix, Prime, Hulu, HBO) Disney+ has some of the worst over-compression, lip-sync, and remembering-which-episode-is-next issues for me. Takes away from the “magic”.


Check your settings. I experienced the same until I altered an Apple TV setting that fixed Disney+. If I recall, the setting was Match Content or Match Dynamic Range (not near the TV right now to confirm the exact name).


Netflix now does this on their lowest paid tier as well. I had to upgrade to the 4K tier just to get somewhat-ok 1080p playback...


This is interesting, because when Disney+ started out they were using a much higher bitrate, second only to Apple TV+.


Economically speaking, it doesn't make any sense for them to spend more on bandwidth and storage if they can get away with not spending more.


I don't quite follow why compression would cause this. It feels more like a side effect of an adaptive HTTPS streaming protocol that automatically adjusts based on your connection speed and so tracks any jitter on the wire. It could also be an issue with the software implementation, because they need to constantly switch between streams based on bandwidth.


> side effect of adaptive HTTPS streaming

Adaptive streaming isn't really adaptive anymore. If you have any kind of modern broadband, the most adaptive it will be is starting off in one of the lower bitrates for the first 6 seconds before jumping to the top, where it will stay for the duration of the stream. A lot of clients don't even bother with that anymore; they look at the manifest, find the highest stream, and just start there.
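A hypothetical sketch of that "go straight to the top" behaviour (the names and fields here are made up for illustration; real players parse DASH/HLS manifests):

    from dataclasses import dataclass

    @dataclass
    class Rendition:
        bandwidth: int    # bits per second advertised in the manifest
        url: str

    def pick_initial_rendition(renditions: list[Rendition]) -> Rendition:
        # No probing, no slow ramp-up: trust the manifest and start at the top.
        return max(renditions, key=lambda r: r.bandwidth)

    ladder = [
        Rendition(1_200_000, "stream_1200k.m3u8"),
        Rendition(4_500_000, "stream_4500k.m3u8"),
        Rendition(15_000_000, "stream_15000k.m3u8"),
    ]
    print(pick_initial_rendition(ladder).url)   # stream_15000k.m3u8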


As a little experiment, I'd like you to set up your own little streaming service on a server and see how much bandwidth it uses, even for just a few users. It adds up extremely quickly, with the actual usage being quite surprising.

At the higher prices, I'd have to agree with you. If you pay for the best you should get the best.


> You especially notice the compression on gradients and in dark movie scenes.

That can happen at even the highest bitrates if "HDR" is not enabled in the video codec.

Related video: https://www.youtube.com/watch?v=h9j89L8eQQk
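If you encode your own content, one widely used mitigation is a 10-bit pipeline, which greatly reduces banding in smooth gradients even for SDR material. A hedged ffmpeg sketch (assumes a build with libx265; settings are illustrative):

    import subprocess

    # Encode with 10 bits per sample so smooth gradients get ~4x the code values
    # of 8-bit video, which is what usually eliminates visible banding.
    subprocess.run([
        "ffmpeg", "-i", "input.mov",
        "-c:v", "libx265", "-crf", "20", "-preset", "slow",
        "-pix_fmt", "yuv420p10le",
        "output.mkv",
    ], check=True)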


Whoa. That is the best thing i watched on YouTube in a long time. Thank you.


I pirate blu-ray rips. Pirates are very fastidious about maintaining visual quality in their encodings. I often see them arguing over artifacts that I absolutely cannot see with my eyes.


>the best picture quality I’ve ever seen was over 20 years ago using simple digital rabbit ears.

The biggest jump in quality was when everything was still analog over the air, but getting ready for the digital transition.

Then digital over the air bumped it up a notch.

You could really see this happen on a big CRT monitor with the "All-in-Wonder" television receiver PCI graphics adapter card.

You plugged in your outdoor antenna or indoor rabbit ears to the back of the PC, then tuned in the channels using software.

These were made by ATI before being acquired by AMD, the TV tuner was in a faraday cage right on the same PCB as the early GPU.

The raw analog signal was upscaled to your adapter's resolution setting before going to the CRT so you had pseudo better resolution than a good TV like a Trinitron. You really could see more details and the CRT was smooth as butter.

As the TV broadcaster's entire equipment chain was replaced (camera lenses, digital sensors, signal processing), they eventually had everything in place and working. You could notice these incremental upgrades until a complete digital chain was established as designed. It was really jaw-dropping. This was well in advance of the deadline for digital deployment, so the signal over the air was still coming in analog the same old way.

Eventually the broadcast signal switched to digital and the analog lights went out, plus the All-in-Wonder was not ideal with a cheap converter like analog TV's could get by with.

But it was still better than most digital TVs for a few years, then it took years more before you could see the ball in live sports as well as on a CRT anyway.

Now that's about all you've got for full digital resolution, live broadcasts from your local stations, especially live sports from a strong interference-free station over an antenna. You can switch between the antenna and cable and tell the difference when they're both not overly compressed.

The only thing was, the digital engineers "forgot" that TV was based on radio (who knew?), so for the vast majority of "listeners" in the fringe reception areas, who could get clear audio but usually not a clear picture, if any: too bad for you. You're gonna need a bigger antenna, good enough to have gotten you a clear picture during the analog days. Otherwise your "clean" digital audio may silently appear on the screen as video, "hidden" within the sparse blocks of scattered random digital noise. When anything does appear at all.



Funny that they're marketing the supposed advantages of higher bitrates using pictures with altered contrast and saturation lol. I would expect the target audience to be somewhat well-versed in the actual benefits? Then again, I wouldn't expect somebody like Scorsese to be a video compression nerd.

Also the whole "you can hear more with lossless audio" is just straight up a lie.


This has been more or less proven to be a complete scam; the quality isn't any better than Blu-ray, and in many cases it's worse.

The “best” quality of streaming you have is Sony Core https://en.wikipedia.org/wiki/Sony_Pictures_Core but it has a rather limited library.


"Not any better than Blu-ray" is the same as saying "much better than streaming."


I think there are a few examples where the bitrate is higher than a native rip however.


Fascinating.

Pricing, if I am reading the site correctly: $7k-ish for a server (+$ for local disks, one assumes), $2-5k per client. So you download the movie locally to your server and play it on clients scattered throughout your mansion/property.

Not out of the world for people who drop 10s of thousands on home theater.

I wonder if that's what the Elysium types use in their NZ bunkers.

No true self-respecting, self-described techie (Scotsman) would use it instead of building their own of course.


For the less affluent you can setup a Jellyfin media server and rip your own blu-rays with makemkv.


It's a little surprising to me that there generally aren't more subscription tiers where you can pay more for higher quality. Seems like free money, from people like you (maybe) and me.


You can already pay for 4K or "enhanced bitrate" but it's still relatively low bitrate and what's worse, this service quality is not guaranteed. I've had Apple TV+ downgrade to 1080p and lower on a wired gigabit connection so many times.


And on top of that a lot of streaming services don't go above 1080p on desktop, and even getting them to that point is a mess of DRM. I sometimes wonder if this is the YouTube powerhouse casting a bad shadow. As LTT says, don't try to compete with YouTube. They serve so much video bandwidth it's impossible to attempt. So all these kinda startup streaming services can't do 4k. Too much bandwidth.


I'm not surprised they don't offer an even higher tier. When you're pricing things, you often need to use proxies - like 1080p and 4K. It'd be hard to offer 3 pricing tiers: 1080p, 4K, 4K but actually good 4K that we don't compress to hell. That third tier makes it seem like you're being a bit fraudulent with the second tier. You're essentially admitting that you've created a fake-4K tier to take people's money without delivering them the product they think they're buying. At some point, a class-action lawsuit would use that as a sort of admission that you knew you weren't giving customers what they were paying for and that it was being done intentionally, both of which matter a lot.

Right now, Netflix can say stuff like "we think the 4K video we're serving is just as good." If they offer a real-4K tier, it's hard to make that argument.


YouTube does 1080p premium without much problem.


Ironically, piracy gives you yet again a better service. Thanks QxR.


Well, you'll be happy to learn that AV2 delivers 30% better quality for the same bitrate!


Isn't Sony Bravia Core supposed to be UHD Blu-ray quality?


and this is why I don't look down on those who choose to pirate bluray/4k content


4K Blu-ray is the top quality.


> Honestly, the best picture quality I’ve ever seen was over 20 years ago using simple digital rabbit ears.

That I find super hard to believe!


Why? ATSC is 19Mbps. A single 1080i video using that whole bitrate will look quite good.


Many confounding factors. That’s just one dimension of image quality. Others include things like the panel quality, production quality.


that's 19mbps including error correction. only 10 after, and that's using mpeg2 which is probably roughly equivalent to 6-7mbps av1


"A terrestrial (over-the-air) transmission carries 19.39 megabits of data per second (a fluctuating bandwidth of about 18.3 Mbit/s left after overhead such as error correction, program guide, closed captioning, etc.),"

https://en.wikipedia.org/wiki/ATSC_standards#MPEG-2


But that's the whole multiplex, right?

It could be a single channel, but usually you have many in the multiplex. I don't know how it works in the US, but for DVB-T(2) that's how it is.


Circa 2009 when analog TV was first shut-off in the US, each DTV station usually only had one channel, or perhaps a second basic one like a static weather radar on-screen. Some did have 3 or 4 sub-channels early on, but it was uncommon.

Circa 2019, after the FCC "repack" / "incentive auction" (to free-up TV channels for cellular LTE use) it became very common for each RF channel to carry 4+ channels. But to be fair, many broadcasters did purchase new, improved MPEG-2 encoders at that time, which do perform better with a lower bit-rate, so quality didn't degrade by a lot.


Wow, that's really different compared to what we have in EU, with up to 10 SD channels in a single multiplex.


You could technically fit 10 SD channels together in the US, except I recall that the government ("FCC") specifically REQUIRED (at least during initial rollout) at least one to be in highdef, unless you applied for a special waiver.

These days the FCC is unlikely to care, but I suspect viewership would suffer if any of the major broadcast networks downgraded their "main channel" signal to SD.

This explains the typical arrangement, as well as listing some of the extreme cases:

https://en.wikipedia.org/wiki/Digital_subchannel#United_Stat...

https://en.wikipedia.org/wiki/Digital_subchannel#Tradeoffs



