AFAIU it was just (some versions of?) DOS/4GW that had a 64MB limit. Some other DOS extenders, in particular the open source DOS/32A, allow using the full 4GB virtual address space.
Mostly because of static linking. C and C++ don't put every library they need in the binary by default. The advantage is that a pure Go or Rust binary just works (most of the time) when copied from one machine to another; you don't have to care about installing other libraries.
Another advantage is that, at least for Rust, you can do whole-program optimization. The entire program tree is run through the optimizer, resulting in all kinds of optimizations that are otherwise impossible.
The only other kinds of systems that can optimize this way are higher level JIT runtimes like the JVM and CLR. These can treat all code in the VM as a unit and optimize across everything.
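In Rust, this whole-program pass is enabled through Cargo's release profile; a minimal Cargo.toml sketch using the standard profile options (worth double-checking against your toolchain's docs):

    [profile.release]
    lto = "fat"        # optimize across the whole crate graph as one unit ("thin" is a cheaper variant)
    codegen-units = 1  # fewer codegen units give the optimizer a bigger window
    strip = true       # strip symbols from the final binary (not the default, as noted below)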
> Another advantage is that, at least for Rust, you can do whole-program optimization. The entire program tree is run through the optimizer, resulting in all kinds of optimizations that are otherwise impossible.
I get why this might lead to big intermediate files, but why do the final binaries get so big?
Rust binaries + all their dynamic libraries are the same size as C++ binaries + their linked libraries (when stripped; stripping isn't the default in Rust).
The main issue is that Rust binaries typically only link to libc, whereas C++ binaries link to everything under the sun, making the actual executable look tiny because that's not where most of the code lives.
Both C++ and Rust are based on monomorphization, which means generic programming is based on an expansion of code for each combination of types. This makes compilation slow and causes code bloat. One then needs whole-program optimization to get this under control to some degree.
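To make that concrete, here is a small Rust sketch; `largest` is a made-up generic function, and the comments describe what monomorphization does with it:

    // Generic function: one source definition...
    fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
        let mut max = items[0];
        for &item in &items[1..] {
            if item > max {
                max = item;
            }
        }
        max
    }

    fn main() {
        // ...but the compiler emits a separate specialized copy of `largest`
        // for every concrete type it is used with.
        println!("{}", largest(&[1, 5, 3]));       // instantiated for i32
        println!("{}", largest(&[1.0, 0.5, 2.5])); // instantiated for f64
    }

Each instantiation is another copy of the code for the optimizer and linker to chew through, which is where much of the compile-time and binary-size cost comes from; whole-program optimization can then inline and deduplicate some of it.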
Go especially: on some platforms it goes straight to syscalls and bypasses libc entirely. It even brings its own network stack. It's the maximalist Plan 9 philosophy in action.
I don't really like Go as a language, but this decision to skip libc and go directly with syscalls is genius. I wish Rust could do the same. More languages should skip libc. Glibc is the main reason Linux software is binary non-portable between distros (of course not the only reason, but most of the problems come from glibc).
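For illustration, a minimal sketch of what "going straight to syscalls" means, in Rust with stable inline asm. It assumes x86_64 Linux, where write(2) is syscall number 1; the `raw_write` helper is made up for this example. This is essentially what Go's runtime does behind the scenes, and what Rust normally delegates to libc:

    use std::arch::asm;

    // Invoke the Linux `write` syscall directly, without going through libc.
    fn raw_write(fd: i32, buf: &[u8]) -> isize {
        let ret: isize;
        unsafe {
            asm!(
                "syscall",
                inlateout("rax") 1isize => ret, // syscall number for write(2); rax also holds the result
                in("rdi") fd as isize,          // file descriptor
                in("rsi") buf.as_ptr(),         // pointer to the data
                in("rdx") buf.len(),            // number of bytes
                out("rcx") _,                   // clobbered by the syscall instruction
                out("r11") _,                   // clobbered by the syscall instruction
            );
        }
        ret // bytes written, or -errno on failure
    }

    fn main() {
        raw_write(1, b"hello from a raw syscall\n");
    }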
> Glibc is the main reason Linux software is binary non-portable between distros
Linux software is binary portable between distros as long as the binary was compiled against a Glibc version that is the same as, or older than, the one shipped by the distros you are trying to target. The lack of "portability" comes from symbol versioning: the library exposes multiple versions of the same symbol precisely so it can preserve backwards compatibility without breaking working programs. For example, a binary that references memcpy@GLIBC_2.14 will not load on a system whose glibc only exports memcpy@GLIBC_2.2.5, while the reverse direction works fine.
And this is not unique to Glibc; other libraries do the same thing too.
The solution is to build your software against the oldest versions of the libraries you intend to support. Nowadays with Docker you can set this up in a matter of minutes (and automate it with a Dockerfile) - e.g. you can use, say, Ubuntu 22.04 to build your program and it'll work on most modern Linux OSes (or at least glibc won't be the problem if it doesn't).
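A minimal sketch of that setup, assuming a Rust project and an Ubuntu 22.04 base image (adjust the packages and build command to whatever the real project needs):

    # Build on Ubuntu 22.04 so the binary links against its (older) glibc and
    # therefore also runs on distros shipping the same or newer glibc.
    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y build-essential curl ca-certificates
    # The compiler itself can be brand new; only the system libraries need to be old.
    RUN curl -sSf https://sh.rustup.rs | sh -s -- -y
    ENV PATH="/root/.cargo/bin:${PATH}"
    WORKDIR /src
    COPY . .
    RUN cargo build --release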
> Linux software is binary portable between distros as long as the binary was compiled against a Glibc version that is the same as, or older than, the one shipped by the distros you are trying to target.
Well, duh? "Property A is possible if we match all requirements of property A".
Yes, using an older distro is the de facto method of resolving this problem. Sometimes it's easy, sometimes it's hard, especially when we want to support older distros while using a new compiler version and fairly fresh large libraries (e.g. Qt). Compiling everything on an older distro is possible, but sometimes it's hell.
> And this is not unique to Glibc; other libraries do the same thing too.
This only means that it is a very good idea to drop the dependency on glibc if it's feasible.
macOS has a "minimum macOS version required" option in the compiler (-mmacosx-version-min); Windows controls this with manifests. It's easy on other systems.
> Yes, using an older distro is the de facto method of resolving this problem.
What I describe is different from what you wrote, which is that Linux software is not binary compatible between distros. That is wrong: binaries work across Linux distributions just fine. What does not work is using a binary compiled against a newer version of some shared libraries (glibc included, but not the only one) on a system that has older versions - but it is fine to use a binary compiled against an older version on a system with newer versions, at least as long as the library developers have not broken their ABI (which is a different topic altogether).
The compatibility issue is not between different distros but between different versions of the same library. What the system imposes (assuming the developers keep their ABIs compatible) is that a binary can only run against shared libraries of the same or a newer version than the one it was linked against - or, more precisely, the libraries at runtime must expose the versions of the symbols that the binary references, which newer library releases keep exporting.
Framing this as software not being binary portable between different distros is wildly mischaracterizing the situation. I have compiled a binary that links against X11 and OpenGL on a Slackware VM, and it works on both my openSUSE Tumbleweed system and my friend's Debian system without issues - that binary is portable across different distros just fine.
Also if you want to use a compiler more recent than the one available in the distro you'll need to install it yourself, just like under Windows - it is not like Windows comes with a compiler out of the box.
Which is why they have already backpedalled on this decision on most platforms. Linux is pretty much the only OS where the syscall ABI can be considered stable.
Yes, Linux is reversed in this aspect -- glibc is not really binary friendly, but kernel syscalls are. On other systems, kernel syscalls are not binary friendly at all, but libc is friendly.
I'm fine with using libc on other systems than Linux, because toolchains on other systems actually support backward compatibility. Not on Linux.
You can skip libc on Windows - you can't skip the system DLLs like kernel32. (In fact, Microsoft provided several mutually incompatible libcs in the past.)
Well, you can non-portably skip kernel32 and use ntdll, but then your program won't work in the next Windows version (same as on any platform really - you can include the topmost API layers in your code, but they won't match the layers underneath in the next version).
But system DLLs are DLLs, so they also don't cause your .exe to get bloated.
Yes, it's not literally libc on Windows, but the point is that directly calling syscalls is not supported; you have to call through the platform's libraries instead.
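To make that concrete, a small Rust sketch for Windows: instead of issuing syscalls yourself, you go through kernel32.dll, the supported boundary. The extern declarations mirror the documented Win32 signatures; treat it as an illustration, not production code:

    use std::ffi::c_void;

    // Win32 functions declared by hand and resolved from kernel32.dll at load time.
    #[link(name = "kernel32")]
    extern "system" {
        fn GetStdHandle(nStdHandle: u32) -> *mut c_void;
        fn WriteFile(
            hFile: *mut c_void,
            lpBuffer: *const c_void,
            nNumberOfBytesToWrite: u32,
            lpNumberOfBytesWritten: *mut u32,
            lpOverlapped: *mut c_void,
        ) -> i32;
    }

    const STD_OUTPUT_HANDLE: u32 = -11i32 as u32; // (DWORD)-11 per the Win32 docs

    fn main() {
        let msg = b"written via kernel32, not a raw syscall\n";
        let mut written = 0u32;
        unsafe {
            let handle = GetStdHandle(STD_OUTPUT_HANDLE);
            WriteFile(
                handle,
                msg.as_ptr() as *const c_void,
                msg.len() as u32,
                &mut written,
                std::ptr::null_mut(),
            );
        }
    }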
On some systems this is just not a supported configuration (like what you're talking about with Windows), and on some they go further and actually try to prevent you from doing so, even in assembly.
There's still something on the platform that you can call without extra indirection in the way on your side of the handoff. That is true on all platforms; whether it's an INT or SYSCALL instruction or a CALL or JMP instruction is irrelevant.
Kind of like the syscall dispatch table on the Linux kernel side, right? After you issue the handoff instruction and it becomes the operating system's problem, there's still more code before you get to the code that does the thing you wanted.
macOS has a different dev culture than Linux, but you can get pretty close if you install the Homebrew package manager. For running LLMs locally I would recommend Ollama (easy) or llama.cpp. Due to the unified memory, you should be able to run larger models than what you can run on a typical consumer grade GPU, but slower.
Unfortunately someone with advanced dementia does not know if she has eaten or not. Most of the time there will be no eating, unless someone else puts food in her mouth.
I'm sorry for your first-hand experience, but I also need to remind you that a single first-hand experience does not translate well to the overall population of people experiencing dementia.
Why would they need to know whether they have eaten, or even what eating is? They would feel the sensation of gnawing hunger in their stomach, the same way a baby or animal does.
Would be great if they could release it on YT in full. I doubt anyone would buy it today since it is so dated, but it would be interesting from a historical perspective.
Influenza vaccination may reduce the risk of MACE and ACS among older adults. Aligned with the World Health Organization guidelines, our findings further support influenza vaccination as an effective public health strategy for potentially reducing cardiovascular disease burden.
Seems to require Win32 (Digital Mars C/C++ Compiler Version 8.57). Is there a version of the C compiler that can run on FreeDOS or MS-DOS?