matja's comments | Hacker News

How can I contribute data for the boards I own that aren't on the site?

Dithering isn't only applied to 2D graphics; it can be applied to any kind of spatial or temporal data to decorrelate quantization error from the signal, or to shift aliasing and distortion noise to other parts of the frequency spectrum. It's also common in audio.
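
To make the audio case concrete, here is a toy numpy sketch (all values are illustrative and my own, not from any particular product) of TPDF dither added before requantization; the dithered version trades signal-correlated distortion for benign broadband noise:

  # Requantize a 1 kHz tone to 8-bit levels, with and without TPDF dither.
  import numpy as np

  rng = np.random.default_rng(0)
  fs = 48_000
  t = np.arange(fs) / fs
  x = 0.3 * np.sin(2 * np.pi * 1000 * t)            # full scale = +/-1.0

  step = 2 / 2**8                                    # 8-bit quantizer step
  tpdf = (rng.uniform(-0.5, 0.5, t.size) + rng.uniform(-0.5, 0.5, t.size)) * step

  plain = np.round(x / step) * step                  # undithered
  dithered = np.round((x + tpdf) / step) * step      # dithered

  # The dithered error is uncorrelated with the signal, so it sounds like
  # flat noise rather than harmonic distortion tied to the tone.
  for name, y in (("plain", plain), ("dithered", dithered)):
      err = y - x
      print(name, "error RMS:", float(np.sqrt(np.mean(err**2))))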

Seems to me the real problem here is not the timezone (there are legitimate business reasons to run something daily at a specific local time...) but having multiple instances of a cron job that overlap; in that case it should either wait until the previous run is done or not start at all. At the very least, prefix the job with "flock -n" if it doesn't/can't handle that itself.
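
For reference, a minimal Python sketch of that "skip if the previous run is still going" behaviour, roughly what prefixing the cron job with "flock -n" buys you (the lock file path is made up):

  import fcntl
  import sys

  # Hold an exclusive, non-blocking lock for the lifetime of the job,
  # equivalent in spirit to `flock -n /tmp/nightly-job.lock <command>`.
  lock = open("/tmp/nightly-job.lock", "w")
  try:
      fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
  except BlockingIOError:
      print("previous run still in progress, skipping this one")
      sys.exit(0)

  # ... the actual job body goes here; the lock is released when the process exits.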


Presumably because it looks identical to a SanDisk Extreme Pro 512 GB, with grey boxes drawn over the logo.


The report has a heavily redacted interview with a submarine expert. Who directed Titanic and The Abyss.

They’re not good at redacting.


> No, our interposer only works on DDR4

Not surprising - even having two DDR5 DIMMs on the same channel compromises signal integrity enough that the frequency has to drop by ~30-40%, so perhaps the best mitigation at the moment is to ensure the host is using the fastest DDR5 available.

So: is the host's DRAM/DIMM technology and frequency included in the remote attestation report for the VM?


Interposers exist for every type of memory.

We use them during hardware development to look at the waveforms in detail well beyond what is needed to read the bits.

The reason their interposer doesn't work with DDR5 is because they designed it with DDR4 as the target, not because DDR5 is impossible to snoop.


The mental image I'm getting from your description is a high-speed o-scope probe copy-pasted 80 times, which would obviously be insane. But Keysight docs show what looks like an entirely normal PCB that literally interposes the BGA with trace wires on every pin, which looks far too simple for a multi-GHz signal.

What do they actually look like and are there teardowns that show the analog magic?


> The mental image I'm getting from your description is a high speed o-scope probe copy-pasted 80 times, which would obviously be insane

It's a thing. It's expensive though. At some point you copy-paste scopes and trigger sync them.

Edit: https://www.teledynelecroy.com/oscilloscope/oscilloscopeseri...


$157k starting price for a 4 channel is a bit rich for the kind of work I do.


100%.

I wonder if these are full real-time sampling scopes. In the past we had equivalent-time sampling scopes (a wideband front end, a fast sampler feeding a slow-rate ADC, and a variable-delay trigger), and many buses have repeatable test patterns that let you trigger that way. They were always a fairly niche device.
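
For anyone unfamiliar with the technique, here's a toy numpy sketch (all numbers made up) of how equivalent-time sampling rebuilds a repetitive waveform from slow acquisitions by stepping the trigger delay:

  import numpy as np

  def waveform(t):                        # repetitive 1 GHz test signal
      return np.sin(2 * np.pi * 1e9 * t)

  slow_dt = 1e-9                          # the ADC only takes one sample per ns
  fine_steps = 20                         # trigger delay stepped in 1/20 ns increments
  samples_per_pass = 50

  passes = []
  for k in range(fine_steps):
      delay = k * slow_dt / fine_steps    # variable-delay trigger
      t = delay + np.arange(samples_per_pass) * slow_dt
      passes.append(waveform(t))          # one slow acquisition per delay setting

  # Interleaving the passes gives an effective sample spacing of slow_dt/fine_steps,
  # i.e. 20x the rate the ADC actually runs at - but only for repetitive signals.
  reconstructed = np.stack(passes, axis=1).reshape(-1)
  print(reconstructed.shape)              # (1000,)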


They're not snooping, they're modifying the address dynamically to cause aliasing.


All of that info is faked. You should never trust a cloud vm. That is why it is called "public cloud".


The attestation report is signed by a key held in the PSP hardware that isn't accessible to any OS or software; the report can then be validated against the vendor's certificate/public key. If that can be faked, are you saying those private keys are compromised?
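
As a rough illustration (not the actual AMD report format or tooling), checking such a report offline boils down to verifying a detached signature against the public key in the vendor's signing certificate. The file names, and the assumption that the ECDSA signature has already been split out in DER form, are mine:

  from cryptography import x509
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import ec

  # Hypothetical inputs: the vendor's signing certificate, the report body,
  # and the report's signature extracted as DER-encoded ECDSA.
  cert = x509.load_pem_x509_certificate(open("vendor_signing_cert.pem", "rb").read())
  report_body = open("report_body.bin", "rb").read()
  signature = open("report_sig.der", "rb").read()

  # Raises InvalidSignature if the report wasn't signed by the key in the cert.
  cert.public_key().verify(signature, report_body, ec.ECDSA(hashes.SHA384()))
  print("report signature checks out against the vendor certificate")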


I'm willing to bet if you ran terrorism-as-a-service.com on a protected VM, it wouldn't be secure for long, and if it really came down to it, the keys would be coughed up.


> If that can be faked, are you saying that those private keys are compromised?

As I understand it, the big idea behind Confidential Computing is that huge American tech multinationals AWS, GCP and Azure can't be trusted.

It is hardly surprising, therefore, that the trustworthiness of huge American tech multinationals Intel and AMD should also be in doubt.


Do you have a benchmark that shows the M1 Ultra CPU to memory throughput?


If you get IPv6 from your ISP you're usually not "behind NAT", even if your home router does NAT IPv4 or your ISP does CGNAT for IPv4.


I am a blue-collar layperson (who only understands IPv4's limitation as a lack of total available IP addresses) who disables IPv6 (at the router level) for this exact reason: I feel like I am losing the little bit of control that being "behind NAT" allows on a private IP range/network (e.g. firewall; port mapping).

Obviously I still use Windows 7 Pro 64-bit as my only Microsoft computer; I also have an Ubuntu dual-Xeon box (for LLM/crypto) and several Apple Silicon machines (for general browsing).


You're misunderstanding the purpose of NAT, which is not a security boundary. Apple, for instance, has (or had) nearly all of their workstations on a public IP space.

You can firewall and port-map devices on public IPs just as effectively as you can behind NAT -- actually a bit more easily, since you're taking NAT out of the picture.


Do you have a gateway that doesn't do IPv6 firewalling (e.g. allow outgoing, only allow established incoming)? I was under the impression that even no-name routers manage to get that right. Why would you need port mapping if not for NAT? Even with NAT, for home use I was always mapping port n to n.


Maybe the number of Windows 7 users has not changed, but those using Windows 10 and 11 are flocking to Linux. That'd be a net positive change in the Windows 7 percentage. :p


I feel those discussions are fuelled by manufacturers like Sony saying [0] things like:

  Lossless compressed RAW:
  ...
  This is a popular format that occupies less space with minimal quality loss.
"minimal" "loss"? That's not "no loss", so what exactly is it?

[0] https://www.sony.co.uk/electronics/support/articles/00257081


I actually asked Sony Support about that. Their reply: "We can confirm that with Lossless compressed RAW there is a minimal quality loss. To have no impact on image quality I suggest using Uncompressed RAW files." Lossless isn't what it used to be.


Seems you just got a first-line robo-reply repeating what the public resources state. It doesn't say much about the actual compression algorithm Sony uses.


Their first reply was "we have passed your question to a higher technical team", then they came back four days later with the above reply. I was enquiring about the A7R mark V, which introduced the much needed "lossless" option. I think I asked because I wondered why they kept the uncompressed option and because experts warned that Sony did that before with "lossless" formats.


It is a shame that Sony has such an obsession with weird proprietary formats.


They read it as "lessloss", I guess.


That's insane. I honestly thought that lossless basically meant running zip over the file and nothing more.

Gotta hate companies these days with their dishonesty. "Lossless" means "lossy". "Unlimited" means "limited to 50GB".


A lot of cameras write "lossless" to mean "perceptually lossless". This is easy to do because ~12-bit ADCs have lots of noise in the low order bits.


"lossless" has always referred to compression, not sampling - but it seems camera manufacturers want to change that for marketing reasons.

Similarly (without starting an audiophile thread): recording a vinyl record and compressing to MP3 is "perceptually lossless" but will differ from compressing to FLAC, never mind that the sampled output will always contain random noise.
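
For contrast, a quick sketch of what "lossless" conventionally promises for compression: decompressing returns bit-for-bit identical data, which any scheme with "minimal quality loss" by definition cannot do.

  import os
  import zlib

  raw = os.urandom(1 << 16) + b"\x00" * (1 << 16)   # stand-in for raw sensor data

  compressed = zlib.compress(raw, level=9)
  restored = zlib.decompress(compressed)

  assert restored == raw                            # exact round trip, no "minimal loss"
  print(f"{len(raw)} bytes -> {len(compressed)} bytes, round trip exact")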


Does removing EXPORT_SYMBOL(__kernel_fpu_end); [0] - which broke ZFS - count as removing stuff or changing the API?

AFAIK that change didn't add functionality or fix any existing issues, other than breaking ZFS - which GKH was absolutely fine with, dismissing several requests for it to be reverted and stating the "policy": [1]

> Sorry, no, we do not keep symbols exported for no in-kernel users.

[0] https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux... [1] https://lore.kernel.org/lkml/20190111054058.GA27966@kroah.co...


Quite a reasonable policy. Add a second line too:

> Sun explicitly did not want their code to work on Linux, so why would we do extra work to get their code to work properly?

Why would you accommodate someone who explicitly went out of their way to not accommodate you?

It took many conflicts with the bcachefs developer to reach this state. The olive branch has been extended again and again...


Sun doesn't exist anymore, and while OpenZFS is compatible with older versions of Oracle's life-support Solaris, it's not the same ecosystem. Yes, the same licensing issues still exist, but OpenZFS has been developed by LLNL for Linux ever since it was called "ZFS on Linux".


If that ecosystem has changed its values/opinions on that topic, then it wouldn't be an impossible task to dual-license it under a compatible license.

(Hard and tedious work, but not impossible).


The only entity that can change the ZFS license is Oracle, and they obviously wouldn't do that.


They could rewrite all the code and then change the license. Patents might still apply (but patents are short enough that I expect any that existed have expired). However, ZFS is a lot of code that is often tricky to get right. It would be really hard to rewrite it in a way that the courts don't (reasonably, correctly) conclude wasn't a rewrite at all, just moving some lines around so you can claim ownership, but it is possible. And by the time anyone knows enough about ZFS to attempt this, they are also too tainted by the existing code.

So of course they won't, but it isn't impossible.


I mean, bcachefs is basically the equivalent of rewriting all that code, without explicitly trying to be a clone. Same for btrfs


And how hard that is shows that ZFS didn't make a bad choice in not attempting the same. (Though it would be interesting if either had aimed to be a clone, that is, with the same on-disk data structures. Interesting, but probably a bad decision: I have no doubt there is something about ZFS that they regret today, simply because the project is more than 10 years old.)


It's supposedly the opinion of Oracle that the CDDL is GPL-compatible and that's the reason they won't do that.


I would not rely on the non-binding opinion of a company known for deploying its lawyers in aid of revenue generation



That wasn't exactly the answer.


Yeah, I agree that based on rewatching that, I've either misrecalled the original material or I got it from another source.

I agree that based on that source, it's more like "meh, we don't really care" (until they do)


Oracle didn't follow that with DTrace. They changed the license away from CDDL when they integrated it into Oracle Linux.


The whole "Sun explicitly did not want" thing is an invention of one person at a conference, and the opposite of what other insiders say.


OK, please explain. ZFS is licensed under the CDDL, which is incompatible with the GPL, i.e. the kernel's license. Sun owned the copyright and could easily have changed the license or dual-licensed it. They didn't... for reasons (likely related to Solaris).


Sun leadership wanted to license OpenSolaris under GPLv3. However, the GPLv3 work was dragging on at the FSF and the license was not released in time. Moreover, there was opposition from the Solaris dev team due to a belief that GPLv3 would lock out reuse of OpenSolaris code (especially DTrace and ZFS) in Free/Net/OpenBSD.

The CDDL was a compromise choice that was seen as workable, based especially on certain older views about which code would or wouldn't be compatible; it was also unclear, and possibly expected, that the Linux kernel would move to GPLv3 (once it was finally released), which the CDDL's drafters saw as compatible with the CDDL.

Alas, the Solaris source release could not wait an indefinite amount of time for GPLv3 to be finalized.


So... as I said, "Sun explicitly did not want". They chose not to license it under GPLv2, or to dual-license it as GPLv2 + GPLv3, for... reasons.

> it was also unclear, and possibly expected, that the Linux kernel would move to GPLv3

In what world? The kernel was always GPLv2 without the "or later" clause. The kernel already had tens of thousands of contributors. Linus had made it quite obvious by that time that the kernel would not move to GPLv3 (even in 2006).

Even if I gave them the benefit of the doubt, GPLv3 was released in 2007. They had years to make the license change and didn't. They were sold to Oracle in 2010.


It's the other way around. It's the GPL which is incompatible with the CDDL (and many other licences).

The CDDL is actually very permissive. You can combine it with anything, including proprietary licences.


Sun is dead, and the ZFS copyright transferred to Oracle, who then turned it into a closed-source product.

The modern OpenZFS project is not part of Oracle; it's a community fork from the last open-source version. OpenZFS is what people think of when they say ZFS; it's the version with support for Linux (contributed in large part by work done at Lawrence Livermore).

The OpenZFS project still has to continue using the CDDL license that Sun originally used. The opinion of the Linux team is that the CDDL is not GPL-compatible, which is what prevents it from being mainlined in Linux (it should be noted that not everyone shares this view, but obviously nobody wants to test it in court).

It's very frustrating when people ascribe malice to the OpenZFS team for having an incompatible license. I am sure they would happily change it to something GPL compatible if they could, but their hands are tied: since it's a derivative work of Sun's ZFS, the only one with the power to do that is Oracle, and good luck getting them to agree to that when they're still selling closed source ZFS for enterprise.


The battle for ZFS could easily now devolve to IBM and Oracle.

Making /home into a btrfs filesystem would be an opening salvo.

IBM now controls Oracle's premier OS. That is leverage.


Several large distros use btrfs for /home.


Oracle databases absolutely cannot.


Reading the kernel mailing lists w.r.t. bcachefs, it looked more like a cattle prod than an olive branch to me… Kent didn't do anything other maintainers don't do, except make the one filesystem that doesn't get irrecoverably corrupted on brownout.

I'm just sorry for the guy and perhaps a little bit sorry for myself that I might have to reformat my primary box at some point…

Also unrelated, but Sun was a very open source friendly company with a wide portfolio of programs licensed under GNU licenses, without some of which Linux would still be useless to the general public.

Overall, designing a good filesystem is very hard, so perhaps don't bite the hand that feeds you…?


I have no idea if you read the right parts because that's not what happened at all.

The maintainer kept pushing new features at a time when only bugfixes are allowed. He also acted like a child when he was asked to follow procedure. I feel sorry for his poor listening and communication skills.


> The maintainer kept pushing new features at a time when only bugfix are allowed.

The "new features" were recovery features for people hit by bugs. I can see where the ambiguity came from.


"accommodate" in this instance would have been accomplished by doing nothing. The Linux kernel developers actively made this change.


Doing "nothing" in this case seems to mean leaving technical debt in the code.

I am not a kernel developer, but a smaller exposed API surface is nearly always better.

The removed function's comment even starts with: Careful: __kernel_fpu_begin/end() must be called with


Not sure if the thread is about how reasonable the policy is, or about whether it's patently untrue that things get removed.


Though it's rich coming from a kernel lacking a better filesystem of its own.


I was curious how OpenZFS worked around that and found [0] & [1]

[0] https://github.com/openzfs/zfs/issues/8259

[1] https://github.com/openzfs/zfs/pull/8965

