Most online video uses crazily overspecified h.264 bit rates for low-complexity content. It's often possible to get 1080p well under 2 Mbit/s with little to no quality loss, and even lower with 2-pass encoding where that's available. I'm not sure how things are with h.265 in a production setting, but at least for home use it seems to have much of the same flexibility.
I get nervous when I see motion artefacts on netflix.
This means I'm pretty much always nervous when watching netflix.
I use a 1 gbps home connection and pay for the most expensive option netflix has.
I'd like to think that the whole concept of motion-based frame prediction will disappear from future video formats.
Motion JPEG XL would be like the Motion JPEG 2000 used in digital cinema, but ~35-40% denser. We could add ordinary delta frames (without motion compensation) without compromising the quality criteria. That would get us into the 0.3 bpp range.
4K at 24 Hz would be 3840 x 2160 x 24 x 0.3, which comes to about 60 Mbit/s (~7.5 MB/s): still doable on home internet speeds, and it would be visually lossless, a better experience than home movie streaming is today. (Free startup idea.) :-)
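To sanity-check that arithmetic, here's a throwaway C snippet. The 0.3 bpp figure is the assumption from the comment above; the 30 fps row is included only for comparison:

    #include <stdio.h>

    int main(void) {
        const double pixels = 3840.0 * 2160.0;  /* one 4K frame */
        const double bpp = 0.3;                 /* assumed bits per pixel */
        const int rates[] = {24, 30};           /* frame rates to compare */

        for (int i = 0; i < 2; i++) {
            double mbps = pixels * bpp * rates[i] / 1e6;
            printf("%d fps: %.1f Mbit/s (%.1f MB/s)\n", rates[i], mbps, mbps / 8.0);
        }
        return 0;
    }

At 24 fps this lands on the ~60 Mbit/s (~7.5 MB/s) quoted above; at 30 fps it would be closer to 75 Mbit/s (~9.3 MB/s).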
60 Mbit/s is easily beyond the limits of many home wifi setups, and it leaves the serving end with capacity for something like 166 users per 10 Gbit port. There are many additional costs to consider beyond the size of the home pipe.
I believe this capacity roughly triples every five years, so when building something for the far future, things can look quite different from a bandwidth/quality perspective.
I think it is better to think of it as XDG failing to respect a world that existed long before it did. The $HOME/.* namespace belongs to the user, but its contents are special. It's almost like complaining about Linux enforcing the naming of certain extended file attributes.
This is advocating for increasing the number of victims of CSAM to include source material taken from every public photo of a child ever made. This does not reduce the number of victims, it amounts to deepfaking done to children on a global scale, in the desperate hope of justifying nuance and ambiguity in an area where none can exist. That's not harm reduction, it is explicitly harm normalization and legitimization. There is no such thing (and never will be such a thing) as victimless CSAM.
This is hoping for some technical means to erase the transgressive nature of the concept itself. It simply is not possible to reduce harm to children by legitimizing provocative imagery of children.
I'd be curious about the OS, input/output sizes, and hardware/virtualization tech; the only reason I can think of for this would be tiny buffers with exorbitantly expensive context switches, like you'd see on some older virtualization stacks or puny escaped-from-a-VCR ARM chips.
> all IBM wants to do with Redhat is turn it into IBM
Reading the histories of both companies, it seems to me that Red Hat was always IBM. The acquisition was a marriage of separated twins more than anything else; the dominant culture in both organizations is largely sales- and consulting-driven, with public strategies that aren't quite identical but rhyme strongly. The technology aspects (while perhaps once important) were far from the prime focus of either organization long before the merger. I think of IBM's acquisition as more about access to marketing expertise and established sales channels than anything to do with technology; most of RH's open source technology is, after all, an advanced form of disguised marketing.
I don't think that's right; Red Hat, culturally, does much more community engagement and open source development. I know several RH developers from working on Ceph-adjacent things, and RH has been much more open to contributions and collaboration in academia and industry; they also funded a significant chunk of storage research in quite a few labs around the area. IBM never had that kind of culture imo, and I'd take RH any day.
From what I know, Red Hat devs are also very open in other areas of OS research and development, from the new io_uring to eBPF, etc. IBM is trying to stamp that out, but it's still there.
Maybe GP is referring to IBM having an equity stake in Redhat since the late '90s? Or, maybe because they both generate the majority of their revenue via support contracts with the same sorts of customers buying said contracts?
We manage this POS for some clients, and it's the worst distribution you can have. Stuff you can "just install" under any other popular distro is often siloed off into special packages you need to buy, despite it being OSS and "just available" on EPEL.
This project has already landed improvements in 3.10, and some much bigger improvements in 3.11. The work for 3.12 is "just" a continuation of that excellent effort.
The 25% number is from the pyperformance benchmark suite, which you can replicate yourself. Whether pyperformance is a representative benchmark suite is another question.
It rubs people the wrong way, but I always call out blanket statements. Generally languages get faster with each version, and there are a lot of numbers thrown around, but that doesn't mean your apps will get anywhere near that boost.
If you're lucky, that one loop that concatenates strings got a few ms shaved off, while that ORM you're using continues to grind the whole thing down.
It's mentioned in the readme: this is measuring the latency of cache coherence. Depending on the architecture, some sets of cores are organized around a shared L2/L3 cache. To acquire exclusive access to a cache line (a memory range of 64-128ish bytes), the caches belonging to other sets of cores either need to be waited on to release their own exclusive access, or need to be told to invalidate their copies. This shows up as a few extra cycles of memory access latency that depend heavily on the hardware cache design, and that extra latency is what is being measured.
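For intuition, here is a minimal, hypothetical sketch of that kind of measurement (not the linked tool itself): two threads pinned to different cores bounce a single cache line back and forth through an atomic flag, so the average round-trip time is dominated by the ownership transfers described above. The core IDs (0 and 1) and the iteration count are arbitrary assumptions; on a real machine you'd pick cores in different L3/CCX groups to see the bigger numbers.

    /* Hypothetical sketch, Linux-only: time a cache line bouncing between
       two pinned threads.  Build with: gcc -O2 -pthread pingpong.c */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <time.h>

    #define ROUNDS 1000000   /* arbitrary iteration count */

    /* Keep the flag alone in its own cache line so only coherence traffic
       for this one line is measured. */
    static _Alignas(64) atomic_int flag = 0;

    static void pin_to_core(int core) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    static void *pong(void *arg) {
        (void)arg;
        pin_to_core(1);                    /* assumption: second core is id 1 */
        for (int i = 0; i < ROUNDS; i++) {
            while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
                ;                          /* spin until ping arrives */
            atomic_store_explicit(&flag, 0, memory_order_release);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, pong, NULL);
        pin_to_core(0);                    /* assumption: first core is id 0 */

        struct timespec a, b;
        clock_gettime(CLOCK_MONOTONIC, &a);
        for (int i = 0; i < ROUNDS; i++) {
            atomic_store_explicit(&flag, 1, memory_order_release);
            while (atomic_load_explicit(&flag, memory_order_acquire) != 0)
                ;                          /* spin until pong answers */
        }
        clock_gettime(CLOCK_MONOTONIC, &b);
        pthread_join(t, NULL);

        double ns = (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
        /* Each round trip is two ownership transfers of the cache line. */
        printf("avg round trip: %.1f ns\n", ns / ROUNDS);
        return 0;
    }

Halve the round-trip number for a rough one-way estimate; repeating the run for every core pair is essentially what produces the latency matrices these tools print.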
Cross-cache communication can happen simply by reading or writing memory that was most recently touched by a thread running on another core.
Check out https://en.wikipedia.org/wiki/MOESI_protocol for starters, although I think modern CPUs implement protocols more advanced than this (MOESI is decades old at this point).
AMD processors also use a hierarchical coherence directory, where the global coherence directory on the I/O die enforces coherence across chiplets and a local coherence directory on each chiplet enforces coherence on-die: http://www.cs.cmu.edu/afs/cs/academic/class/15740-f03/www/le...
Alleged Russian bot here: I was torn apart in a thread about the European energy crisis for attempting a balanced perspective. It's sad to see, but not really worth the time trying to fight against it. The vast majority of folk seem actively happy to be misinformed, so long as that misinformation is the consensus among their social circles. Better to use the information as some kind of indicator of the demographic you are interacting with, and act accordingly (most probably, find something better to do).
It’s not just Reddit, either. This same story has played out for me on Anime News Network’s forums, some gaming forums, etc.
Ultimately I just take my toys and go home - they don’t want me there, and it’s their legal right to say so.
But it results in a forum where the views on display appear to show unanimous consensus that {x} is good or {y} is bad, which is potentially dangerous for society at large and certainly bad for an open society of debate and knowledge sharing.
Even worse, I’ve seen instances where a blatantly bigoted, racist, or violent extreme view is allowed to stay (down voted to hell, of course) while my and others’ more nuanced or intelligent takes are scrubbed and banned.
I can only presume this is intentional with the effect of demonstrating that “only violent extremists are anti-{x} or pro-{y}, and you wouldn’t want to be associated with those people, now, would you?”
I only wish I had an example handy to share, because it’s been pretty blatant at times.
Ultimately, the moral judgements associated with every political argument are getting ridiculous and (intentionally?) stifling debate while stirring unrest, and most of it feels artificial.
I've always been against "hate speech" bans or modifiers for this reason.
Real, actionable incitement to violence and bodily harm is already carved out by law as unprotected speech.
You're left with either superficial hateful sentiment or someone who has a nuanced position. It's fine to say "don't do that here, thanks." But, it's being used as a cudgel from the top down to constrain public discourse and manufacture consent as what is "hate" becomes more and more abstract and more and more inclusive. If PETA is suddenly in control of the "community guidelines" on a site, would sharing the fact that I had eggs for breakfast be a form of hate speech?
Intentionally stifling debate: yes, because it's easier to paint your opponent as a monster/dishonest/Nazi/murderer than it is to actually answer their position in a way that other people can follow and understand. It's easier to shut down a debate than it is to win one.
> I was recently banned from a subreddit after one (neutral) comment for being a white supremacist.
I'm curious what you said. There is lots and lots of actual neo-Nazi recruitment material/copypasta out there that is deliberately written to sound "neutral." I can maybe see how, if you accidentally drifted too close to that, you could get caught in the crossfire.