Dithering isn't only applied to 2D graphics; it can be applied to any kind of spatial or temporal data to reduce the noise floor, or to move aliasing/quantization distortion into other parts of the frequency spectrum. It's also common in audio.
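To make that concrete, here's a toy Python sketch (my own made-up example, not from any particular codebase): quantize a sine to a few bits with and without TPDF dither. Without dither the error is correlated with the signal and shows up as harmonic distortion; with dither it becomes signal-independent broadband noise that can later be shaped out of the band you care about.

    import numpy as np

    fs = 48_000
    t = np.arange(fs) / fs
    signal = 0.5 * np.sin(2 * np.pi * 1000 * t)   # 1 kHz test tone
    step = 2 ** -6                                # coarse quantizer step

    # Straight quantization: error is correlated with the signal (distortion)
    plain = np.round(signal / step) * step

    # TPDF dither: sum of two uniform noises, triangular over +/- one step
    dither = (np.random.uniform(-0.5, 0.5, fs) +
              np.random.uniform(-0.5, 0.5, fs)) * step
    dithered = np.round((signal + dither) / step) * step

    err_plain = plain - signal      # tonal error, harmonics of the input
    err_dither = dithered - signal  # noise-like error, flat across the spectrum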
Seems to me the real problem here is not the timezone (there are legitimate business needs to run something daily at a specific local time...) but having multiple instances of a cron job that overlap, in which case it should wait until the previous run is done or not start at all. At least prefix the job with "flock -n" if it doesn't/can't handle that itself.
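For reference, a minimal Python sketch of the same non-blocking-lock idea (the lock path and job function are made up for illustration), roughly what prefixing the crontab entry with "flock -n" gives you:

    import fcntl
    import sys

    def run_the_job():
        """Placeholder for the actual cron task."""
        print("doing the nightly work")

    # Take an exclusive, non-blocking lock; if a previous run still holds it,
    # exit immediately instead of piling up overlapping instances.
    lock = open("/var/lock/nightly-report.lock", "w")
    try:
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit(0)

    run_the_job()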
Not surprising - even having 2 DDR5 DIMMs on the same channel compromises signal integrity enough that the frequency has to drop by ~30-40%, so perhaps the best mitigation at the moment is to ensure the host is using the fastest DDR5 available.
So: are the host's DRAM/DIMM technology and frequency included in the remote attestation report for the VM?
The mental image I'm getting from your description is a high-speed o-scope probe copy-pasted 80 times, which would obviously be insane. But Keysight docs show what looks like an entirely normal PCB that literally interposes the BGA with trace wires on every pin, which looks far too simple for a multi-GHz signal.
What do they actually look like and are there teardowns that show the analog magic?
I wonder if these are full sampling scopes. In the past we had equivalent-time sampling scopes (a wideband front end, a fast sampler feeding a slow-rate ADC, and a variable-delay trigger), and many buses have repeatable test patterns that let you trigger that way. They were always fairly niche devices.
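Roughly how I understand equivalent-time sampling, as a toy numeric sketch (all numbers made up): the pattern repeats on every trigger, so one slow acquisition per repetition, each at a slightly longer programmed delay, reconstructs a waveform far faster than the ADC's real sample rate.

    import numpy as np

    def waveform(t):
        """Repeating test pattern phase-locked to the trigger, e.g. a 2 GHz tone."""
        return np.sin(2 * np.pi * 2e9 * t)

    n_points = 500
    trigger_interval = 1e-6    # one slow acquisition per microsecond
    delay_step = 1e-12         # programmed delay advances 1 ps per trigger

    samples = []
    for k in range(n_points):
        t_sample = k * trigger_interval + k * delay_step
        samples.append(waveform(t_sample))

    # Because the pattern repeats exactly every trigger interval, the samples
    # trace out the waveform as if it were captured at an effective 1 Tsample/s.
    reconstructed = np.array(samples)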
The attestation report is signed by a key held in the PSP hardware, not accessible to any OS or software, and can then be validated with the vendor's certificate/public key. If that can be faked, are you saying that those private keys are compromised?
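Schematically, the check on the guest-owner side looks something like this Python sketch (key type, hash, and signature encoding are my assumptions, loosely modelled on SEV-SNP's ECDSA P-384 reports; the real flow also has to walk the vendor's cert chain and convert the raw signature format):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    def report_is_genuine(signing_key: ec.EllipticCurvePublicKey,
                          report: bytes, signature: bytes) -> bool:
        """Verify the attestation report against a key rooted in the vendor cert.

        Assumes a DER-encoded ECDSA signature over the raw report bytes.
        """
        try:
            signing_key.verify(signature, report, ec.ECDSA(hashes.SHA384()))
            return True
        except InvalidSignature:
            return False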
I'm willing to bet if you ran terrorism-as-a-service.com on a protected VM, it wouldn't be secure for long, and if it really came down to it, the keys would be coughed up.
I am a blue-collar layperson (who only understands IPv4's limitation as a lack of total available IP addresses) who disables IPv6 (at the router level) for this exact reason — I feel like I am losing the little bit of control that being "behind NAT" allows on a private IP range/network (e.g. firewall; port mapping).
Obviously I still use Windows 7 Pro 64-bit as my only Microsoft computer — I also have an Ubuntu dual-Xeon box (for LLM/crypto) and several Apple Silicon machines (for general browsing).
You're misunderstanding the purpose of NAT, which is not a security boundary. Apple, for instance, has (or had) nearly all of their workstations on a public IP space.
You can still firewall and port-map devices on public IPs just as effectively as you can behind NAT -- and actually a bit more easily, since you're taking NAT out of the picture.
Do you have a gateway that doesn't do IPv6 firewalling (e.g. allow outgoing, only allow established incoming)? I was under the impression that even no-name routers manage to get that right. Why would you need port mapping if not for NAT? Even with NAT, for home use I was always mapping port n to n.
Maybe the number of Windows 7 users has not changed, but those using Windows 10 and 11 are flocking to Linux. That'd be a net positive change in the Windows 7 percentage. :p
I actually asked Sony Support about that. Their reply: "We can confirm that with Lossless compressed RAW there is a minimal quality loss. To have no impact on image quality I suggest using Uncompressed RAW files." Lossless isn't what it used to be.
Their first reply was "we have passed your question to a higher technical team"; they came back four days later with the above reply. I was enquiring about the A7R Mark V, which introduced the much-needed "lossless" option. I think I asked because I wondered why they kept the uncompressed option, and because experts warned that Sony had done this before with "lossless" formats.
"lossless" has always referred to compression, not sampling - but it seems camera manufacturers want to change that for marketing reasons.
Similarly (without starting an audiophile thread): recording a vinyl record and compressing to an MP3 is "perceptually lossless" but will differ from compressing to FLAC, never mind that the sampled output will always contain random noise.
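That's the whole meaning of the word: the decoder gives back the exact bits that went in, noise and all. A trivial Python illustration (generic zlib, nothing camera-specific):

    import zlib

    original = bytes(range(256)) * 1000   # stand-in for raw sensor samples

    compressed = zlib.compress(original, 9)
    restored = zlib.decompress(compressed)

    assert restored == original           # bit-identical round trip
    print(f"{len(original)} -> {len(compressed)} bytes, nothing lost")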
Does removing EXPORT_SYMBOL(__kernel_fpu_end); [0], which broke ZFS, count as removing stuff or changing the API?
AFAIK that change didn't add functionality or fix any existing issues; its only effect was to break ZFS - which GKH was absolutely fine with, dismissing several requests for it to be reverted and stating the "policy": [1]
> Sorry, no, we do not keep symbols exported for no in-kernel users.
Sun doesn't exist anymore, and while OpenZFS is compatible with older versions of Oracle's life-support Solaris, it's not the same ecosystem. Yes, the same licensing issues still exist, but OpenZFS has been developed by LLNL for Linux ever since it was called "ZFS on Linux".
If that ecosystem has changed its values/opinions on the topic, then it wouldn't be an impossible task to dual-license it under a compatible license.
They could rewrite all the code and then change the license. Patents might still apply (but patents are short enough that I expect any that existed have expired). However, ZFS is a lot of code that is often tricky to get right. It would be really hard to rewrite it in a way that the courts don't (reasonably/correctly) rule was not a real rewrite but just moving some lines around so you can claim ownership, though it is possible. By the time anyone knows enough about ZFS to attempt this, they are also too tainted by the existing code.
And how hard that is proves that ZFS didn't make a bad choice in not trying the same. (Though it would be interesting if either had aimed for a clone, that is, the same on-disk data structures. Interesting, but probably a bad decision, as I have no doubt there is something about ZFS that its developers regret today, simply because the project is more than ten years old.)
OK, please explain. ZFS is licensed under the CDDL, which is incompatible with the GPL, aka the kernel's license. Sun owned the copyright and could easily have changed the license or dual-licensed it. They didn't... for reasons (likely related to Solaris).
Sun leadership wanted to license OpenSolaris under GPLv3. However, GPLv3 work was dragging on at the FSF and the license was not released in time. Moreover, there was opposition from the Solaris dev team due to the belief that GPLv3 would lock out reuse of OpenSolaris code (especially DTrace and ZFS) in Free/Net/OpenBSD.
The CDDL was a compromise choice that was seen as workable, based especially on certain older views of which code would or wouldn't be compatible; it was unclear, and possibly expected, that the Linux kernel would move to GPLv3 (once it was finally released), which the CDDL's drafters regarded as compatible with the CDDL.
Alas, the Solaris source release could not wait an unclear amount of time for GPLv3 to be finalized.
So... as I said, "Sun explicitly did not want" to. They chose not to license it under GPLv2 or dual-license it GPLv2 + GPLv3 for... reasons.
> it was unclear and possibly expected that Linux kernel will move to GPLv3
In what world? The kernel was always GPLv2 without the "or later" clause. The kernel had tens of thousands of contributors, all of whom would have to agree to a relicensing. Linus had made it quite obvious by that time that the kernel would not move to GPLv3 (even in 2006).
Even if I gave them the benefit of the doubt, GPLv3 was released in 2007. They had years to make the license change and didn't. They were sold to Oracle in 2010.
Sun is dead, and the ZFS copyright transferred to Oracle, who then turned it into a closed-source product.
The modern OpenZFS project is not part of Oracle; it's a community fork of the last open-source version. OpenZFS is what people mean when they say ZFS; it's the version with support for Linux (contributed in large part by work done at Lawrence Livermore).
The OpenZFS project still has to keep using the CDDL license that Sun originally chose. The opinion of the Linux team is that the CDDL is not GPL-compatible, which is what prevents it from being mainlined in Linux (it should be noted that not everyone shares this view, but obviously nobody wants to test it in court).
It's very frustrating when people ascribe malice to the OpenZFS team for having an incompatible license. I am sure they would happily change it to something GPL-compatible if they could, but their hands are tied: since it's a derivative work of Sun's ZFS, the only one with the power to do that is Oracle, and good luck getting them to agree when they're still selling closed-source ZFS for enterprise.
Reading the kernel mailing lists w.r.t. bcachefs, it looked more like a cattle prod than an olive branch to me… Kent didn't do anything other maintainers don't do, except make the one filesystem that doesn't get irrecoverably corrupted on a brownout.
I'm just sorry for the guy and perhaps a little bit sorry for myself that I might have to reformat my primary box at some point…
Also unrelated, but Sun was a very open source friendly company with a wide portfolio of programs licensed under GNU licenses, without some of which Linux would still be useless to the general public.
Overall, designing a good filesystem is very hard, so perhaps don't bite the hand that feeds you…?
I have no idea if you read the right parts because that's not what happened at all.
The maintainer kept pushing new features at a time when only bugfixes are allowed. He also acted like a child when asked to follow procedures. Feel sorry instead for his poor listening and communication skills.