I built a (research) library a few years ago to rewrite ELF binaries; our research projects ran into a lot of limitations with doing incremental patches to a binary (ELF has a lot of redundant representations of the same data). For us, parsing the binary into a normalized representation, modifying that, and re-serializing worked — we could make more intrusive changes to the binary, and (almost? I don’t recall anything breaking) everything in the Debian repos still ran after the binaries had been rewritten.
I expect the library is now woefully out of date, and documentation is mostly in the form of conference talk slides:
I don't think the pilot factors that much into the cost of operating a helicopter. Even a small helicopter (e.g. the Robinson R44, which is a bare-bones trainer) needs a ~$250k complete overhaul every 2,200 hours or 12 years in service. Add $60 per hour in fuel and $60-100 per hour of miscellaneous other maintenance, and your variable costs quickly approach $300/hr. Add in all the fixed overhead, and a tiny helicopter that can take 2 passengers and limited baggage will easily run $400 per hour (which is about what they rent for). A bigger turbine helicopter is significantly more expensive.
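For concreteness, the back-of-the-envelope arithmetic looks like this (a sketch using the rough figures above; the numbers are illustrative, not from any operator's books):

```python
# Back-of-the-envelope hourly cost for a small piston helicopter.
# All figures are the rough numbers from the comment above, not authoritative.

OVERHAUL_COST = 250_000      # complete overhaul, USD
OVERHAUL_INTERVAL = 2_200    # hours between overhauls
FUEL_PER_HOUR = 60           # USD/hr
MISC_MAINTENANCE = 80        # USD/hr, middle of the 60-100 range

overhaul_reserve = OVERHAUL_COST / OVERHAUL_INTERVAL
variable_cost = overhaul_reserve + FUEL_PER_HOUR + MISC_MAINTENANCE

print(f"overhaul reserve: ${overhaul_reserve:.2f}/hr")   # ~$113.64/hr
print(f"variable cost:    ${variable_cost:.2f}/hr")      # ~$253.64/hr, before fixed overhead
```

The overhaul reserve alone is already over $100/hr, which is why the pilot's wage is a comparatively small slice of the total.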
Most of this is because helicopters are complex machines (you're spinning really big blades pretty fast. Then using the blades + bearings to also lift the weight of the helicopter + more. Oh, and you're changing the pitch angle of the blades. Oh, you mean changing the pitch angles of the blades WHILE THEY GO AROUND...), and any failure in any of these parts is usually fatal, so they have to be built and maintained to a very high degree of reliability.
Airplanes are much simpler (the spinning propeller attached to an engine is one part. The wing generating lift is another part. The flight controls are yet another part) and have more opportunities for redundancy (a wing has multiple spars and is attached to the airplane with many bolts. All helicopter blades meet in one hub, which is attached on one axis).
Combine that with aircraft scaling up more (you can build a 500-person aircraft, but only a 20-person or so helicopter) and going much faster (a $120/hr propeller plane will outrun many, if not most, helicopters), and the cost to go a given distance by plane will always be much cheaper than by helicopter.
I'm less comparing the helicopter approach with airplanes, and more comparing them to multirotors, where I think a lot of the autonomous flight hype is coming from, and where the use cases substitute rather than complement each other. In that sense, the fixed costs are less "baked-in" to the product. You can swap out helicopter blades for better composites, and add in a glass cockpit, but the fundamental control system still holds. You can't really change the number of motors in a multicopter without a full-scale overhaul of the avionics suite. And I sure hope the FAA wouldn't let a multicopter with some number of failed motors take off just because it doesn't immediately fall out of the sky.
I would agree with you that operating a helicopter will always be expensive. I don't anticipate autonomous helicopters being price-competitive with something like cars. People pay the premium for vertical flight to gain the benefits of vertical flight. In addition, I'm pretty sure that, given the high fixed price, that market would rather pay more to get a premium product than accept an inferior one (e.g. insufficient range, speed, flight time, safety, etc.). I was more saying that making helicopter flights marginally cheaper will make said flights marginally more available to more people.
There might be game changers down the line. For example, Sikorsky has a lot of experience with experimental control systems and hybrid helicopters that may reduce operational expenses, and if they can prove SARA and its derivatives are as reliable as (or better than) a human pilot, and convince the general public/unions to fly without one, they might design helicopters that are cheaper to maintain. But as with all things, it's more important to make sure new innovations are deliverable and provably an improvement.
Nah, helicopters are relatively simple machines; literally every person on this site can understand the basics of helicopter technology (of course, getting proficient and building a career in heli-tech is a whole different story). The only difficulty is making them bulletproof, but nowadays crashes due to technical problems are rare.
Also, helicopters are generally used for different purposes than planes. Transporting thousands of people thousands of kilometers is probably not a job for helicopters, but that does not imply that helicopters are useless.
Well, we start running one instruction, it just never quite finishes. The other OISC systems run way more than one instruction, they’re just always the same one with different parameters.
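To make the "same instruction, different parameters" point concrete, here's a minimal SUBLEQ interpreter (a sketch of the classic one-instruction computer, not tied to any particular system mentioned above):

```python
# SUBLEQ: the canonical one-instruction computer. Every "instruction"
# is a triple (a, b, c): mem[b] -= mem[a], and if the result is <= 0,
# jump to c; otherwise fall through. A negative program counter halts.

def subleq(mem):
    pc = 0
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Tiny program: add mem[9] into mem[10] using scratch cell mem[11].
mem = [
    9, 11, 3,    # scratch -= A (scratch = -A, <= 0 for A >= 0, goes to 3 either way)
    11, 10, 6,   # B -= scratch (B = B + A), continues at 6 either way
    11, 11, -1,  # scratch -= scratch (= 0, <= 0), branch to -1: halt
    7,           # mem[9]:  A = 7
    5,           # mem[10]: B = 5
    0,           # mem[11]: scratch
]
subleq(mem)
print(mem[10])   # 12
```

Every step executes the exact same subtract-and-branch operation; the program is nothing but its parameters.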
Atomic clocks show up at surplus auctions/stores somewhat frequently (they’re sold as caesium frequency standards, e.g. the HP 5071). There are a few reports of enthusiasts keeping them running at home (http://leapsecond.com/ptti2003/tvb-Amateur-Timekeeping-2003....)
I’m sure HP/Agilent/others will gladly sell you the newer state-of-the-art models as well (although, as high-precision, very-low-volume devices, they’ll be fabulously expensive); if an industry consortium wants even more precision, it could staff a research team and shave off a bit more frequency error.
However, I think for industry it’s cheaper to use GPS receivers than to support a powerful transmitter (and its use of valuable spectrum). GPS can keep incredibly accurate time.
Sure, as long as the satellites are up and you have line of sight.
Aside from hostile actors, space debris, solar storms and other threats exist. We shouldn't take for granted that our satellites are just always going to be there.
It’s not even desktop users; most desktop users download Ubuntu and never touch anything, on reasonably common PC hardware. Kernel regressions mostly get caught in the Ubuntu betas or testing tracks (e.g. Debian Sid).
The typical user the kernel developers focus on here is a kernel developer: always running the latest kernel with a stable user space. I find it extremely narcissistic that they reject security improvements for billions of devices for what essentially just makes developers' lives easier.
But do you want your box to silently send corrupted data for the next two years? Or would you rather reboot every night, and maybe escalate to your Red Hat support contract, where someone will then fix the underlying bug (for which you now have crash dumps)?
That Red Hat support contract won't save you from a bug in a binary-blob network driver.
Crashing the whole kernel at the drop of a hat seems like a pretty extreme stance to take as a general policy IMHO. Killing and restarting the driver will usually suffice, although some data may be lost and have to be retransmitted.
Fixing all memory corruption bugs is infeasible without fundamentally changing the way Linux is developed. There is so much code (which is constantly being added to and changed) written by humans who make mistakes.
There will always be some bugs that are in between being discovered (by someone, maybe malicious, maybe not) and being fixed. How else do you protect against vulnerabilities in that stage?
Linus' response is that calling it an infeasible problem is a cop-out. The right way to go about it is to fix them all, incrementally if need be, and not break userland in the process.
These comments sound analogous to real-world security and societal issues, like the tension between increasing army size and addressing the underlying issues.
One is a short term solution, the other long term.
1) Failing loudly is better than failing silently. A memory corruption issue (or a bad refcount, etc.) is not a benign issue that only becomes relevant under carefully crafted exploit conditions; the carefully crafted exploit is only needed to get the system into an attacker-controlled state (i.e. code execution). By itself, with non-malicious inputs (usually something random or slightly atypical: rare enough not to have been noticed yet, but typical enough that some program does it), the system is likely either to panic immediately (the same result as with PaX) or to corrupt some memory, in which case you will have a lot of strange behaviour to track down later. Users will probably blame that on hardware or on their user space, so you might never see the reports; for example, a recent OSDI paper showed that ext3/4 had several real-world data corruption bugs. If these aren’t as frequent as the recent bcache issues, no one notices.
2) When I was doing research projects on memory defenses in the kernel about 3 years ago, there was no (commonly used, that I saw) automated testing infrastructure for the kernel. This makes regressions, especially in drivers for rare hardware, hard to catch. While tests aren’t a panacea, I think the Linux community overestimates what fraction of problems code reviews will catch.
3) The “don’t break user space” strategy is already failing. Every mainstream distribution and embedded vendor stays on an old kernel branch, and big deployments do staged rollouts and extensive burn-in tests. This isn’t just because of the kernel, but because of breaking changes everywhere (compilers, standard libraries, etc. all need to change sometimes). The last time this happened, IIRC it was some audio bug in a strange configuration. In my experience, running a non-standard Linux audio config causes countless breakages, so an additional one in the kernel that might save my personal data from being exfiltrated is worth it. Most users have average (and therefore well-tested) setups, which means they won’t see breakages as often.
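The fail-loud argument in point 1 can be sketched with a toy refcount (plain illustrative Python, not kernel code): the question is whether an over-decrement is caught at the buggy call site, or allowed to silently produce corrupt state that surfaces much later.

```python
# Toy illustration of fail-loud vs fail-silent reference counting.
# Purely illustrative; real kernels use atomic ops and different APIs.

class SilentRef:
    """No underflow check: a double put() silently corrupts state."""
    def __init__(self):
        self.count = 1

    def put(self):
        self.count -= 1          # can go negative without complaint
        return self.count == 0   # "free the object now?"

class LoudRef:
    """Checked decrement: the bug is reported at the faulting call."""
    def __init__(self):
        self.count = 1

    def put(self):
        if self.count <= 0:
            raise RuntimeError("refcount underflow: probable double put()")
        self.count -= 1
        return self.count == 0

silent = SilentRef()
silent.put()
silent.put()                 # bug: second put() on a dead object
print(silent.count)          # -1, and nobody noticed

loud = LoudRef()
loud.put()
try:
    loud.put()               # same bug, caught immediately
except RuntimeError as err:
    print("caught:", err)
```

In the silent case the double-free-style bug only shows up later as "strange behaviour"; in the loud case you get a crash (or report) pointing at the faulty caller.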
Perfect software doesn’t exist, and even Microsoft backed off from maintaining religious backwards compatibility (note that Microsoft’s approach was not to flame at developers and hinder new development, but to extensively build compatibility shims. Often, these came with trade-offs strongly in favour of security, e.g. UAC).
Breaking user space is ok; users already expect breakage, and the cost of the additional breakages is low (to users and to society as a whole) compared to the cost of security breaches [citation needed, but Linux kernel security is relied on in a lot of places].
So one of Linus' main points in this series of posts is that failing loudly is actually not always better than failing silently or quietly, and it's really annoying when people come in making that assumption without thinking. This is also something that he is constantly repeating and ranting about, and it's arguably one of the reasons why Linux is so successful.
Think about a smartphone - do most users want it to crash and reboot, even if some error (which could end up being a security issue) occurred? The answer is no, absolutely not. The crashing and rebooting itself isn't really that helpful. Reporting the bug to the Linux developers _would_ be helpful.
Some people do want the frequent crashing behavior and that's okay, but it's not okay to make that decision for everyone.
Also, users might expect minor breakage if someone somewhere makes a mistake, but that doesn't mean it's okay. That's like saying it's okay for someone who always washes their hands before eating to get sick, just because they knew they might get sick anyway.
> Breaking user space is ok; users already expect breakage, and the cost of the additional breakages is low (to users and to society as a whole)
Which is why everyone loves rolling releases so much that Windows 10's forced upgrades are universally praised and Linux Desktop has a dominant market share.
At least in the East Bay, one reason HOV lanes might run more effectively is that they are also express lanes (with limited points at which you can merge in and out). In the other lanes, passing on the right / not passing only in the left lanes is allowed, which slows down merging considerably. Also, HOV lanes by definition have at least twice the passenger usage per car, so 2 HOV lanes and 4 regular ones move the same number of people.
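The throughput arithmetic is simple (a sketch; the per-lane car capacity is an assumed round number, and occupancy is taken as exactly 2 vs 1):

```python
# Passenger throughput: 2 HOV lanes vs 4 regular lanes.
LANE_CAPACITY = 1800      # cars/hour/lane, an assumed round figure
HOV_OCCUPANCY = 2         # people per car (at least 2, by definition)
SOLO_OCCUPANCY = 1        # people per car in regular lanes (roughly)

hov_people = 2 * LANE_CAPACITY * HOV_OCCUPANCY       # 2 HOV lanes
regular_people = 4 * LANE_CAPACITY * SOLO_OCCUPANCY  # 4 regular lanes
print(hov_people, regular_people)   # same number of people per hour
```

Half the lanes, the same people moved, because occupancy doubles.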
Car companies (at least European ones) tend to be pretty transparent about this:
- Audi (for example): letter (car/SUV/race car) + number (quality: 1 = cheap, 8 = luxury)
- BMW: the first number is a quality class; the next two numbers are the engine size.
- Mercedes: the first letter is a quality class; the number is the engine size.
Other luxury consumer goods use pretty similar schemes, e.g. audio gear (e.g. Bang & Olufsen: the higher the number the better, with some number of meaningless zeroes added depending on fashion).
One observation might be that these companies all have halo products that are used to anchor/promote the volume offerings that drive revenue (e.g. the Audi R8 and the A3, the $90k speakers and the $1k set).
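As a toy illustration of how mechanical these naming schemes are, here's a sketch that decodes Audi-style letter+number names (the letter-to-body-style mapping and the "tier" reading are my own illustrative assumptions, not official nomenclature):

```python
import re

# Toy decoder for letter+number model names like "A3", "Q7", "R8".
# The mapping below is illustrative, not Audi's official scheme.
BODY = {"A": "car", "Q": "SUV", "R": "race car"}

def decode(model):
    """Split a model name into (body style, tier); None if it doesn't fit."""
    m = re.fullmatch(r"([A-Za-z])(\d)", model)
    if m is None:
        return None
    letter, tier = m.group(1).upper(), int(m.group(2))
    return BODY.get(letter, "unknown"), tier  # higher tier = more upmarket

print(decode("A3"))  # ('car', 3)
print(decode("R8"))  # ('race car', 8)
```

The point is that a buyer (or a three-line script) can rank the whole lineup from the name alone, which is exactly what the halo-product anchoring relies on.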
> Audi (for example): letter (car/SUV/race car) + number (quality: 1 = cheap, 8 = luxury) - BMW: the first number is a quality class; the next two numbers are the engine size. - Mercedes: the first letter is a quality class; the number is the engine size.
You need to already know the brand to know this.
> (quality: 1-cheap, 8-luxury)
How high does it go? If they decide to turn it up to 11 tomorrow, is 8 still "luxury"?
I can't agree that this sort of versioning is any more transparent.
> (e.g. the Audi R8 and the A3, the $90k speakers and the $1k set)
This means nothing unless you already know the brand. There's going to be a minimum set of knowledge required to understand these versioning schemes.
https://github.com/jbangert/mithril
There’s also https://github.com/aclements/libelfin (parsing only, supports DWARF) and https://github.com/bx/elf-bf-tools (a Turing machine inside ELF relocations), and of course the “old guard” of ELF reversing tools, ERESI/elfsh (website seems down; GitHub mirror at https://github.com/thorkill/eresi).