fweimer's comments | Hacker News

How would this work? Wouldn't all these drives start losing data at roughly the same time?

Yes, but different pieces of data. The stored parity allows you to reconstruct any piece of data as long as it is only lost on one of the drives (in the single parity scenario).

The odds of losing the same piece of data on multiple drives are much lower than the odds of losing any piece of data at all.
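A toy sketch of the mechanism in Python (assuming fixed-size blocks and plain XOR parity, as in RAID-5; the block values are made up for illustration):

    # The parity block is the XOR of all data blocks, so any single
    # missing block equals the XOR of all surviving blocks plus parity.
    from functools import reduce

    def parity(blocks):
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\xa0\xb0"
    p = parity([d0, d1, d2])          # stored on the parity drive
    assert parity([d0, d2, p]) == d1  # d1 lost; recovered from the rest

This only holds as long as at most one block per stripe is lost; two overlapping losses need a second, independent parity (RAID-6).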


But the data is not disappearing, it's corrupted - so how do you know which bits are good and which are not?

You typically need to maintain much newer C++ compilers because things from the browser world can only be maintained through periodic rebases. Chances are that you end up building a contemporary Rust toolchain as well, and possibly more.

(Lucky for you if you excluded anything close to browsers and GUIs from your LTS offering.)


If you can find the patches, it's fun to tweak them in the most conservative way possible to apply to the old code base.

However, things get annoying once something ends up on some priority list (like the Known Exploited Vulnerabilities list from CISA), you ship the software in a much older version, and there is no reproducer and no isolated patch. What do you do then? Rebase to get the alleged fix? You can't even tell if the vulnerability was present in the previous version.


> However, things get annoying once something ends up on some priority list (like the Known Exploited Vulnerabilities list from CISA), you ship the software in a much older version, and there is no reproducer

There are known exploited vulnerabilities without a PoC? TIL, and that doesn't sound fun at all.


Distribution maintainers who do the backports do not necessarily have access to this kind of information. My impression is that open sharing of in-the-wild exploits isn't something that happens regularly anymore (if it ever did), but I'm very much out of the loop these days.

And access to a reproducer is merely a substitute for the missing public vulnerability-to-commit mapping in software that has a public version control repository.


This guy backports.

It's more surprising to me that software isn't portable enough that you can develop locally on x86-64 and then have a proper pipeline produce the official binaries.

Outside the embedded space, cross-compilation really is a fool's errand: either your software is not portable (which means it's not future-proof), or you are targeting an architecture that is not commercially viable.


> It's more surprising to me that software isn't portable enough that you can develop locally on x86-64 and then have a proper pipeline produce the official binaries.

This is what we largely do - my entire team other than me is on x86, but setting up the ARM pipelines (on GitHub Actions runners) would have been a real pain without being able to debug issues locally.


No two development groups agree on the desired features, so it would have to be a custom compiler plugin.

You could start with a Perl script that looks at the output of “clang++ -Xclang -ast-dump” and verifies that only permitted AST nodes are present in files that are part of the project sources.
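A rough sketch of that check (in Python rather than Perl, and with an illustrative deny-list; a real one would reflect whatever the group agreed on):

    # Hypothetical checker: fail if a source file uses banned constructs.
    # Assumes clang++ is on PATH and the files compile standalone.
    import re, subprocess, sys

    FORBIDDEN = {"CXXThrowExpr", "CXXTryStmt", "GotoStmt"}  # illustrative only

    def forbidden_nodes(path):
        dump = subprocess.run(
            ["clang++", "-fsyntax-only", "-Xclang", "-ast-dump", path],
            capture_output=True, text=True, check=True).stdout
        # Node kinds precede their address, e.g. "|-CXXThrowExpr 0x...".
        return FORBIDDEN.intersection(re.findall(r"(\w+) 0x[0-9a-f]+", dump))

    failed = False
    for path in sys.argv[1:]:
        nodes = forbidden_nodes(path)
        if nodes:
            print(path + ": uses " + ", ".join(sorted(nodes)))
            failed = True
    sys.exit(1 if failed else 0)

In practice you would also filter out nodes that come from system headers, and "-ast-dump=json" is easier to parse reliably than the pretty-printed tree.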


For sure no two groups want the same subset, but is there no "standard way" to opt in or out in the ecosystem? It's strange that large orgs like Google have style guidelines, yet manual code reviews are required to enforce them. (Or maybe my understanding of what's enforced is wrong.)

There are whole industries built around the notion that “License: MIT” is all that is required to meet the notification requirements in the license. So I wouldn't say that the MIT license is easy to understand.

It depended on whether the programs were distributed together. So it wasn't okay to link against OpenSSL for GNU/Linux distributions (although interpretations varied). For a time, this was used to push GnuTLS as an alternative to OpenSSL. But it was generally agreed that it was okay to link against CryptoAPI on Windows because you would not distribute Windows code along with your GPL binaries.

They can switch from LGPLv2.1 to GPLv2 or GPLv3 for future development because the license has an explicit provision for that.


What does the other side look like? How would you go about finding people interested in this space, and who are not yet part of the LLVM and GNU toolchain communities (at least not in a very visible way)?


All those LLVM forks need maintainers, too.

Then there are the people building compilers accidentally, as in the <xyz>-as-code space. Infrastructure automation deals with grammars, symbol tables, (hopefully) module systems, IRs, and so forth. Only the output is very different.

And of course the toolchain space is larger than just compilers. Someone needs to maintain the assemblers, linkers, debuggers, and core runtime libraries. If you are building a Linux distribution, someone has to figure out how the low-level pieces fit together. It's not strictly a compiler engineering role, but it's quite close. Pure compiler engineering roles (such as maintaining a specific register allocator) might be quite rare.

It's a small field, but probably not that obscure. Despite the efficiency gains from open-source compilers, I don't think it's shrinking.

