Hacker News

I distribute binaries. My binaries work on six different operating systems. In order to do that I had to roll my own C library. I'm happy I did that since it's so much better than being required to use six different ones.

I'm not opposed to vDSO but I disagree with how Linux maps it into memory by default. Linux should not be putting anything into the address space that the executable does not specify. MMUs are designed to give each process its own address space. Operating systems that violate that assumption are leaky abstractions imposing requirements where they shouldn't.

The main thing dynamic shared objects accomplish is giving upstream dependencies leverage over your software. They have a long history of being mandated by legal requirements such as the LGPL and Microsoft EULAs. It's nice to have the freedom to not link the things.



> My binaries work on six different operating systems. In order to do that I had to roll my own C library.

Other people have made software for decades without writing a program-specific libc. Tell me you at least started with something decent like musl instead of literally writing your own libc from printf on up.

> Linux should not be putting anything into the address space that the executable does not specify

Execution has to start somewhere, and kernels have often reserved parts of the system address space for themselves.

> The main thing dynamic shared objects accomplish is giving upstream dependencies leverage over your software.

Loose binding in interfaces allows systems on both sides of the interface to evolve. If you want 100% complete control over your system for some reason instead of writing programs that play well with others, just ship your thing as a VM image and be done with it.


I used lots of code from Musl, OpenBSD, and FreeBSD. I used Marco Paland's printf. I used Doug Lea's malloc. I used LLVM's compiler-rt. I used David Gay's floating point conversion utilities. The list goes on. Then I stitched it all together so it goes faster and runs on all operating systems rather than just Linux. See https://justine.lol/cosmopolitan/index.html and https://github.com/jart/cosmopolitan

Trapping (SYSCALL/INT) is a loose binding. The kernel can evolve all it wants internally. It can introduce new ABIs. Processes are also a loose binding. I think communicating with other tools via pipes and sockets is a fantastic model of cooperation. Same goes for vendoring with static linking. Does that mean I'm going to voluntarily load Linux distro DSOs into my address space? Never again. Programs that do that won't have a future outside Docker containers.

Also, my executables are VM images. They can boot on metal too. Read https://justine.lol/ape.html and https://github.com/jart/cosmopolitan/issues/20#issuecomment-... Except unlike a Docker distro container, my exes are more on the order of 16 KB in size. That's how fat an executable needs to be in order to run on six different operating systems and boot from the BIOS too.


> Same goes for vendoring with static linking. Does that mean I'm going to voluntarily load Linux distro DSOs into my address space? Programs that do that won't have a future outside Docker containers.

Strong claim. Wrong, but strong claim.

The completely-statically-linked model you're proposing might be acceptable on servers, but on mobile and embedded devices like Android, it's a showstopper: without zygote pre-initialization and DSO page-sharing, Android apps would each be at least 3MB heavier than they are today and take about 1000ms longer to start, and a typical Android device has a lot of these processes running.

More broadly, yes, in most contexts I see a general trend away from elaborate code-sharing schemes and towards "island universe" programs that vendor everything. But these universes need to interact with their host system through a stable ABI somehow, and I believe that SYSCALL is fundamentally the wrong layer for that interaction, as it's not flexible enough. For example, the Linux gettimeofday() optimization couldn't have been done without the ability to give Linux programs userspace code that runs without entering the kernel, via the vDSO. How do you propose the kernel do things like the vDSO gettimeofday optimization?


If you think I'm wrong then why don't you tell me what requirements you've faced as a software developer distributing binaries? 99% of developers have never needed to deal with the long tail of corner cases.

Doesn't everything on Android start off with a Java runtime (ART) as a dependency? In that case the freedom to not use DSOs is something that Google has already taken away from you. That's not a platform I'd choose to develop for unless I was being paid to do it.

On x86, RDTSC returns invariant timestamps on any recent CPU, so you technically don't need shared memory to get nanosecond-precision timestamps. XNU does the same kind of thing with its comm page, and they don't call it a DSO, because that's just shared memory. I have nothing against shared memory.



