
There are downsides, though I'm unsure whether they're significant or negligible. And the same goes for "internal" protocols - that essentially goes against modularity (and while in the past there were good reasons to move away from modularity in pursuit of performance, darn, baudline.com of 2010 works amazingly well and is still in my toolbox!)

A big advantage of the "old ways" was the cohesion of software versions within a heterogeneous cluster. In a way I caught the tail end of that with the phasing out of MIT Athena (which at the time was very heterogeneous on the OS and architecture side) - but the question is, well, why.

Our industry is essentially a giant loop of centralizing and decentralizing, with advantages to both, and propagation delays between "new ideas" and implementation. Nothing new; the whole economy is necessarily cyclic, so why not this.

I'd argue that in the era of inexpensive hardware (again) and heterogeneous edge compute, being able to run a single binary across all possible systems will again be advantageous for distribution. Some of that is the good old cosmopolitan libc, some of that is just a broad failure of /all/ end-point OSes (which will breed its own consequences) - Windows 11, macOS, the Androids, etc.





I have no idea what you're trying to say. Are the "old ways" you're referring to having multiple ABIs on one system, like 32-bit and 64-bit x86? Were software versions within a heterogeneous cluster more cohesive when we had 32-bit and 64-bit on the same machine..? What?

SGI IRIX and HP-UX handled multiple ABIs from one root partition, with the dynamic linker using the appropriate paths for the various userlands.

This had the advantage that a single networked root filesystem could boot both M68K and PA-RISC, or both o32 and n64 MIPS ABIs, and I'm pretty sure this would've worked happily on IA64 (again, from the same FS!).

The notion of "local storage boot" was relatively new and expensive in Unix-land; single-user computing was alien, everyone was thin-clienting in. And it was trivial to create a boot server in which 32-bit, 64-bit, and even other-arch (!) software versions were kept in perfect sync.

Nothing in current Linux actually forbids that. With qemu and binfmt_misc you can easily have one host and userlands for multiple architectures; and it sometimes even works OK for direct kernel syscalls.
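For anyone who hasn't tried it, the qemu + binfmt_misc setup is roughly this (a sketch assuming a Debian-ish x86_64 host and an aarch64 userland; package names and the chroot path are assumptions, and registration needs root):

```shell
# Install user-mode qemu plus the binfmt registration glue
# (Debian/Ubuntu package names; other distros differ).
sudo apt-get install qemu-user-static binfmt-support

# qemu-user-static registers interpreters for foreign ELF formats with the
# kernel's binfmt_misc. Inspect what got registered:
ls /proc/sys/fs/binfmt_misc/
cat /proc/sys/fs/binfmt_misc/qemu-aarch64

# From here an aarch64 binary runs "transparently": the kernel hands it to
# qemu-aarch64-static, which translates instructions and forwards syscalls
# to the host kernel. E.g. with a hypothetical arm64 rootfs at /srv/arm64:
sudo chroot /srv/arm64 /bin/uname -m   # reports the emulated architecture
```

The syscall-forwarding part is where the "sometimes even works OK" caveat lives: most syscalls translate cleanly, but ioctls and anything with architecture-specific struct layouts can bite.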

All of this was essentially aiming for a very different world - one that still runs behind the scenes in many places. Current Linux is dominated both by "portable single-user desktop" workloads (laptops) and by servers run by fast-time-to-market startups on JIT'd or interpreted languages. That pushed the pendulum toward VMs, containerization, and essentially ephemeral OSes. That's fine for the stated use case, but there are /still/ tons of old use cases - POS terminals actually running off a console driver of a (maybe emulated) old Unix. And a viable migration path for many of those might well be multi-endian (but often, indeed, emulated) something.

Even early Windows NT handled multi-architecture binaries and could've run fat binaries! We only settled on x86 in the mid-1990s!



