
I had 8 IPs on a Hetzner server years ago. One IP had an iptables rule to accept OpenVPN on any port.

My OpenVPN config was a long list of commonly accepted ports, on either TCP or UDP.

Startup would take a while but the number of times it worked was amazing.
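
For the curious, the client side of that trick can be written directly into the config: OpenVPN accepts multiple remote lines (optionally tried in random order) and walks the list until one connection succeeds. A minimal sketch with a hypothetical host name and an arbitrary pick of "commonly open" ports, assuming a reasonably recent OpenVPN 2.x client:

    client
    dev tun
    # try the listed endpoints in random order until one gets through
    remote-random
    remote vpn.example.org 443 tcp
    remote vpn.example.org 53 udp
    remote vpn.example.org 123 udp
    remote vpn.example.org 1194 udp
    resolv-retry infinite
    nobind

On the server side this pairs with the catch-all iptables rule mentioned above, so every listed port ends up at the same OpenVPN instance.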


Interesting. But who is OpenDevicePartnership?

Looking at the members on the repository this seems to be a Microsoft project?


Can one even do UEFI firmware projects without at least keeping Microsoft in the loop?

As far as I remember, they control the issuance of keys for bootloaders. Or is this project supposed to do away with that?


Already today you can remove the Microsoft keys from most mainboards' UEFI and enroll your own. You can perfectly well make your own UEFI implementation without Microsoft.
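
To illustrate how little Microsoft needs to be involved on that path, here is roughly what enrolling your own keys looks like with sbctl (one of several tools for this; a sketch, not a recipe, and the file path to sign is hypothetical):

    # put the firmware into setup mode via the UEFI menu first
    sbctl status            # check setup mode / secure boot state
    sbctl create-keys       # generate your own PK/KEK/db keys
    sbctl enroll-keys       # enroll them (add -m to keep Microsoft's certs)
    sbctl sign -s /boot/EFI/Linux/linux.efi   # sign your own bootloader/kernel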


Except that many component manufacturers release their EFI capsules signed with the Microsoft PKI. So no, you can't fully remove them if you want to verify updates.


While "So no, you can't fully remove them if you want to verify updates" is a valid point, it's also an answer to a different question than the one asked.


You're completely missing the point here.


It's not that Microsoft controls the issuance, it's that their keys are pretty much guaranteed to be installed and thus getting your keys signed with their CA means you can use the pre-existing trust roots.

They are also the one party forcing the freedom-enabling but formally standard-breaking ability to reset the Platform Key, because Microsoft actually documents (or used to) a process for deploying systems signed with your own key as part of its highest-security deployment documentation for enterprise customers.


If you want to implement UEFI secure boot and verify existing signed objects then you need to incorporate Microsoft-issued certificates into your firmware, but that's very different from needing Microsoft to be in the loop - the certificates are public, you can download them and stick them in anything.


Microsoft even has their own Rust project for UEFI.

https://microsoft.github.io/mu/


Patina is a significant evolution of Mu.

Mu has some bits & pieces of Rust code and EDKII is still the upstream for Mu.

Patina is a 100% Rust DXE Core implemented from the spec.


Are you somehow related to either project? You seem to have a good understanding of both pieces :)


Most of the top contributors are @microsoft.com so I would say it's a bit more than just "in the loop".


It's not open. It's not really about devices. And it's certainly not a partnership.

Open as in "open for business".

Compatibility as I understand it means "mixing this in a project is OK".

This is the case if the two licenses aren't at odds. Usually one license is stricter and you have to adhere to that one for the combined work.

A counter-example is GPLv2 and the Apache License 2.0. Those two are incompatible. This was fixed with GPLv3, and you can often upgrade to GPLv3.

So no, this won't allow you to relicense as GPLv2. But you can use GPLv2 code.

This is especially relevant if you have such code redistribution clauses.


  > So no, this won't allow you to relicense as GPLv2. But you can use GPLv2 code.
I don't think your interpretation is correct:

   If the Licensee Distributes or Communicates Derivative Works or copies thereof based upon both the Work and another work licensed under a Compatible Licence, this Distribution or Communication can be done under the terms of this Compatible Licence
To me, this means that the combined work (e.g. EUPL + GPLv2) may be distributed under the "Compatible License" (GPLv2, in this case), but if you were to distribute only the EUPL-licensed work, you would have to distribute it under EUPL.

Besides, I do not think GPLv2 allows you to distribute a combined work under EUPL, for it is listed as GPL-Incompatible. The combined work would have to be distributed under a license compatible with both EUPL and GPLv2.


> Besides, I do not think GPLv2 allows you to distribute a combined work under EUPL, for it is listed as GPL-Incompatible. The combined work would have to be distributed under a license compatible with both EUPL and GPLv2.

AFAICT there is one aspect that seems to trip up people who come from a US-centric view of these licenses (including the FSF): IIRC, under EU law a program can be made up of parts under multiple licenses without each one affecting the others, because the "virality" aspect of the GPL (and similar clauses) does not work under that legal framework (because of how "combined work" is defined in the EU). There is an article[0] about why the EUPL is not viral (both by choice and because of EU law) that explains it.

The How to use EUPL[1] document also spells it out:

---

But the definition of derivative works depends on the applicable law. If a covered work is modified, it becomes a derivative. But if the normal purpose of the work is to help producing other works (it is a library or a work tool) it would be abusive to consider everything that is produced with the tool as "derivative". Moreover, European law considers that linking two independent works for ensuring their interoperability is authorised regardless of their licence and therefore without changing it: no "viral" effect."

---

Note that, in practice, since 99.9% of the software written in the EU also goes outside the EU (including to the US), the above doesn't matter much for (A)GPL software, so even people (and companies) inside the EU treat (A)GPL virality the way it is treated in the US. It is only for software meant to be used within the EU alone (like government software) that the distinction matters.

[0] https://interoperable-europe.ec.europa.eu/collection/eupl/ne...

[1] https://interoperable-europe.ec.europa.eu/collection/eupl/ho...


That is true and an important aspect.

It still does not explain the cognitive dissonance of the EUPL.

1. Source Merging or Statically Linking

Since the EU recognizes that these form derivative works, the compatibility provisions in the EUPL are useless, at least as long as they are not interpreted as re-licensing.

2. Dynamic linking, IPC, or network requests

If the EU is serious that dynamic linking does not create a derivative work, the compatibility provisions in the EUPL are not necessary.


I tried a few times, as some BIOSes have a hidden or disabled setting, but I never got past a plain crash. Device and CPU vendor support for classic S3 is shrinking. E.g. on Framework laptops the Intel CPU(!) does not officially support S3 sleep.

So I can understand that there is no option for it if all you can get is out of spec behavior and crashes.

Also note that it is incompatible with some secure boot and system integrity settings.


The product page lists EDK II. Is the code available anywhere? I can't see it in edk2-platforms...

I would love to have a UEFI I can compile....


They claim that the documentation and the open-source BIOS/kernel will be released some time in Q1 2025.

It remains to be seen when that happens, but if they keep their promise it will indeed be the first device with any Arm core better than Cortex-A78 that has adequate documentation and software support.

For all their previous devices, Radxa provides complete schematics and PCB layouts, as was the standard rule many years ago, though now most Western companies have abandoned this good practice. So I expect that this will also be true for the Radxa Orion.

For now, NVIDIA Orin is the best Arm-based device that has complete documentation and software support. It has been severely overpriced in the past, but now that the price has dropped to $250 for an 8 GB Orin Nano developer kit, it has become very competitive.


I guess the reason is the screen. It's 320x240, and 0.3 MP is 640x480 (VGA). The secondary screen is even lower resolution (160x120).

It does work very well for this screen resolution.

And what else would you do with this media given it's a feature phone?


> And what else would you do with this media given it's a feature phone?

Send it to someone? It probably supports MMS. Or transfer it to your computer over USB-C/bluetooth.


Target customer group doesn't know how to use a computer, for the most part.

VGA is not that bad for phone pictures. If you scale down a photo from a decent camera to VGA, it can look fine. The problem is that for VGA cameras, the resolution is the smallest issue: colors, sharpness, and dynamic range are likely tragic.


> VGA is not that bad for phone pictures

You're right. It is the terrible JPEG compression which destroys everything.


No, it's not the JPEG compression.

This is an example of an image from my daughter's VGA camera: https://imgur.com/oMNkrUm

The image is ~100 KB, which is plenty for this resolution. But the input data from the sensor is awful because the sensor and the optics are awful.


Yeah, people really don’t seem to understand that most kids currently in high school have only used a desktop computer to play PC games, if at all.


Vodafone Germany already turned off MMS. Others will follow.


It was hurting profits.


Is it? The article lists 2015 as the year where things improved a lot; 2017 is well past that. The numbers are low, and even that's inflated due to recalls.

I've seen >>10-year-old laptops where the battery is still good enough to go from charger to charger. Just go to eBay and check out 2009 MacBooks. That's ~15 years now.

I don't think this is unrealistic if you can live with the heavier degradation.


It just depends on what you use for management.

IIRC, /etc/network/interfaces does a reconfiguration that's pretty disruptive.

Things like brctl and ethtool worked on the fly without issues (note though that I mostly used Arista years ago).

It is usually non-disruptive if it gets applied as deltas. If your config tool does a teardown/recreate then that's disruptive. All within the bounds of Ethernet and routing protocols, of course (OSPF DR/BDR changes are disruptive, STP can be fun, ...).


Depends on what you are doing. But you can take the path of app/OS images.

My home network is just OpenWrt, and I use make plus a few scripts and the ImageBuilder to create images that I flash, including configs.

For the RPi I actually liked cloud-init, but it is too flaky for complicated stuff. In that case I would nowadays rather dockerize it and use systemd + podman, or a kubelet in standalone mode. Secrets on a mount point. Make it so that the boot partition of the RPi is the main config folder. That way you can locally flash a golden image.
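
To make the systemd + podman part concrete, here is a minimal sketch of such a unit; the service name, container image, and mount paths are placeholders, and the config/secrets directory is assumed to live on the boot partition:

    [Unit]
    Description=Example app container
    Wants=network-online.target
    After=network-online.target

    [Service]
    # config and secrets come from the boot partition mount
    ExecStartPre=-/usr/bin/podman rm -f example-app
    ExecStart=/usr/bin/podman run --rm --name example-app \
      -v /boot/app-config:/etc/app:ro \
      docker.io/library/nginx:alpine
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target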

Anything that mutates a server is brittle as the starting point is a moving target. Building images (or fancy tarballs like docker) makes it way more likely that you get consistent results.


This looks really good. I remember looking into BWT as a kid. It's a true "wat" once you understand it.

And once you understand it, why does it compress so well? Because suffixes tend to have the same byte preceding them.

Bzip2 is still highly useful because it is block based and can thus be scaled nearly linearly across CPU cores (both on compress and decompress)! Especially at higher compression levels. See e.g. lbzip2.

Bzip2 is still somewhat relevant if you want to max out cores. Although it has a hard time competing with zstd.
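
The block independence is easy to demonstrate. A small Python sketch (not how lbzip2 is implemented, just the same idea, and the input file path is only an example): compress fixed-size blocks in parallel and concatenate the results, which is still a stream the normal decompressor accepts because bzip2 handles concatenated streams.

    import bz2
    from multiprocessing import Pool

    BLOCK = 900 * 1000  # roughly the bzip2 -9 block size

    def compress_block(block: bytes) -> bytes:
        return bz2.compress(block, compresslevel=9)

    def parallel_bzip2(data: bytes) -> bytes:
        blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
        with Pool() as pool:
            # each block becomes an independent bzip2 stream; the
            # concatenation is still valid input for bzip2/bunzip2
            return b"".join(pool.map(compress_block, blocks))

    if __name__ == "__main__":
        data = open("/usr/share/dict/words", "rb").read()  # any handy input file
        packed = parallel_bzip2(data)
        assert bz2.decompress(packed) == data  # stdlib handles multiple streams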


Kamila Szewczyk is working on bzip3 to improve the state of the art in the domain of Burrows-Wheeler-based compressors:

https://github.com/kspalaiologos/bzip3

I’m keeping fingers crossed for the project. Especially given that the author is 19 and her best work is yet to come.


When I first heard about bzip3, a few months ago, I ran a series of tests comparing it with zstd and other compression programs.

In the beginning I was extremely impressed, because with the test archives that I happened to use, bzip3 outperformed zstd in all cases, at all possible settings for both: either in compression time at the same compressed size, or in compressed size at the same compression time.

Nevertheless, my initial enthusiasm later had to be tempered, because I found other test archives where the compression ratio achieved by bzip3 was more modest, falling behind other compression programs.

Therefore, the performance of bzip3 seems a little hard to predict, at least for now, but there are circumstances when it has excellent performance, with a much better compromise between compression speed and compressed size than the current alternatives.


Someone posted this [1] here recently, which I found extremely informative. Unless I've missed something, zstd outperforms bzip2 in all cases there?

[1] https://insanity.industries/post/pareto-optimal-compression/


There is one thing you can't do with most algorithms: parallelize decompression. That's because most compression algorithms use sliding windows to remove repetitive sections.

And decompression speed also drops as compression ratio increases.

If you transfer over, say, a 1 GBit link then transfer speed is likely the bottleneck, as zstd decompression can reach >200 MB/s. However, if you have a 10 GBit link then you are CPU bound on decompression. See e.g. the decompression speeds at [1].

Bzip2 is not window but block based (level 1 == 100 kB blocks, 9 == 900 kB blocks, IIRC). This means that, given enough cores, both compression and decompression can parallelize, at something like 10-20 MB/s per core. So somewhere above 10 cores you will start to outperform zstd.

Granted, that's a very, very narrow corner case, but one you might hit with servers; that's how I learned about it. Still, so far I've converged on zstd for everything. It is usually not worth the hassle to squeeze out these last bits of performance.

[1] https://gregoryszorc.com/blog/2017/03/07/better-compression-...


That's possible with pzstd in any case. zstd upstream has a plan to eventually support parallel decompression natively but hasn't prioritized it given the complexity and lack of immediate need.

https://github.com/facebook/zstd/issues/2499#issuecomment-78...


The issue talks about one vs. multiple frames. That's exactly the issue. It's not a matter of complexity, it's a matter of bad compromises.

The issue can easily be played through. The most simplistic encoding where the issue happens is RLE (run-length encoding).

Say we have 1MB of repeated 'a'. Originally 'aaa....a'. We now encode it as '(length,byte)', so the stream turns into (1048576,'a').

Now we would want to parallelize it over 16 cores. So we split the 1MB into 16 64k chunks and compress each chunk independently. This works but is ~16x larger.

Similar things happen for window based algorithms. We encode repeated content as (offset,length), referencing older occurrences. Now imagine 64k of random data, repeated 16 times. The parallel version can't compress anything (16x random data), the non-parallel version will compress it roughly 16:1.
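
You can see this with a few lines of Python, using lzma from the standard library as a stand-in for any window/dictionary-based compressor (a chunked zstd, where each chunk becomes its own frame, behaves the same way):

    import lzma, os

    chunk = os.urandom(64 * 1024)
    data = chunk * 16            # 1 MB: 64k of random data repeated 16 times

    whole = lzma.compress(data)  # one stream: later repeats reference the first copy
    chunked = b"".join(lzma.compress(data[i:i + 64 * 1024])
                       for i in range(0, len(data), 64 * 1024))

    print(len(whole))    # roughly one chunk (~64k) plus overhead
    print(len(chunked))  # ~16x that: each chunk is incompressible on its own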

There is a trick to avoid this downside. The lookup is not unlimited; there is a maximum window size to limit memory usage. For compatibility it's 8 MB for zstd (at level 19), but you can go all the way to 2 GB (--ultra, level 22, --long=31). As you make chunks significantly larger than the window, you only lose out on the ramp-up at the start of each chunk. E.g. if you use 80 MB chunks then a bit less than 10% of the file is encoded worse, though a well-crafted file could still double in encoded size. If you don't care about parallel decompression then you can parallelize only parts like the match search. This gives a good speedup, but only on compression. That's the current parallel compression approach in most cases (IIRC), leading to a single frame, just produced faster. The problem remains that back-references can only be resolved backwards.

The whole problem is not implementation complexity. It's something you algorithmically can't do with current window based approaches without significant tradeoffs on memory consumption, compression ratio and parallel execution.

For bzip2 the file is always chunked into blocks of at most 900 kB. Each block is encoded independently and can be decoded independently. It avoids this whole tradeoff altogether.

I would also disagree with "no need". Zstd easily outperforms tar, but even my laptop SSD is faster than the zstd speed limits. I just don't have the _external_ connectivity to get something onto my disk fast enough. I've also worked with servers 10 years ago where the PCIe bus to the RAID card was the limiting factor. Again easily exceeding the speed limits.

Anyway, as mentioned a few times, it's an odd corner case, and one can't go wrong by choosing zstd for compression. But it is really fun to dig into these issues and look at them, and I hope this sparks some interest!


My point is, it's already possible to use multiple independently compressed (and decompressible) frames with zstd if you really want to.

It's even in the zstd repo, under a "contrib" implementation

https://github.com/facebook/zstd/blob/87af5fb2df7c68cc70c090...

That does, of course, require that you compress it into multiple frames to begin with, which could be a problem if you don't control the source of the compressed files, because the default is a single frame. In theory if everyone used pzstd to compress their files, it would be strictly superior to BZ2 in nearly every circumstance. As it is, you do have to go out of your way to do that.

But I don't think that necessarily means the single-frame choice by default is a bad tradeoff. It's better in most circumstances. And if they do eventually figure out a reasonably efficient way to handle intra-frame parallel decompression, then it's just gravy.


There are three kinds of people in my experience:

1. bzip2 -1

2. bzip2 -9

3. What's bzip2?

A huge amount of time is spent optimizing for #3. Maybe instead we should offer descriptive commands that convey the goals. Say, "squash", "speedup", and "deflate", or some such.


I think I grok bzip2 fairly well, but I can’t figure out what your descriptive commands would actually do :S


I still struggle to get my head around BWT. I understand what it does conceptually and why it helps, and I can read code that implements it, but I don't fully get it - there's a mental disconnect for me somewhere. Mainly, I can't convince myself that computing the inverse transform is possible.

It's one of those algorithms that I can say for sure I'd never have been able to come up with on my own.


I think it really helps to stop thinking about the string as a linear sequence with a beginning and end, and instead consider an unbroken loop of characters. Literally imagine the string written into a circle like the letters on an Enigma rotor.

Then you can consider the construction of all the substrings of length 2, length 3, and so on. You may also wish to consider the same induction, but working backwards from its conclusion. Start by considering the set of n length substrings, then the n-1 length substrings, etc.

Either way, your objective should be to convince yourself that you can reconstruct the whole ring from the BWT. At this point it is not hard to make the final leap to understand how it can be applied to regular strings.
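
If it helps, the naive inverse fits in a few lines. This sketch uses the common trick of appending a sentinel byte so no separate row index is needed (and assumes the input never contains a 0x00 byte):

    def bwt(s: bytes, sentinel: bytes = b"\x00") -> bytes:
        s += sentinel  # unique terminator that sorts before everything else
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return bytes(rot[-1] for rot in rotations)  # last column of the sorted rotations

    def ibwt(last: bytes, sentinel: bytes = b"\x00") -> bytes:
        # repeatedly prepend the known last column and re-sort; after len(last)
        # rounds every row is a full rotation, and the one ending in the
        # sentinel is the original string
        table = [b""] * len(last)
        for _ in range(len(last)):
            table = sorted(bytes([c]) + row for c, row in zip(last, table))
        return next(row for row in table if row.endswith(sentinel))[:-1]

    assert bwt(b"banana") == b"annb\x00aa"
    assert ibwt(bwt(b"banana")) == b"banana"

The quadratic table rebuild is only for intuition; real implementations invert with a single counting pass over the last column, but the point is the same: the last column plus sorting pins down the whole ring.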


It's one of those things you saturate your brain with for a few days, then put it down, and two weeks later in the shower you figure it out.


Being block based also means that recovery from file damage is easy. Bzip2 ships with such a recovery utility (bzip2recover).

