Confusingly written article, to the point of being unreadable unless you already know exactly how graphics drivers in Windows work.
"WDDM is a major overhaul that shifts responsibility of managing the GPU away from Win32k and gives better control over the GPU to the driver vendor. Dxgkrnl.sys, the DirectX graphics driver, talks to a miniport driver to provide varying levels of WDDM interfaces."
"Officially starting with Windows 8, every GPU driver for the system had to be a WDDM driver. But all that was really dropped was the miniport driver."
"For WDDM, the communication back to the miniport driver is more direct."
So does the miniport driver exist in modern Windows at all (and is an essential part of how WDDM works), or was it dropped?
The prevailing opinion seems to have shifted towards using the frame pointer for its special purpose, in order to improve debuggability/exception handling?
But the article is really quite misinformed. As you say, many mainframe/mini and some early microprocessor architectures don't have the concept of a stack pointer register at all, and neither do "pure" RISCs.
I'd argue there is a real "C Stockholm Syndrome" though, particularly with the idea of needing to use a single "calling convention". x86 had - and still has - a return instruction that can also remove arguments from the stack. But C couldn't use it, because historically (before ANSI C), every single function could take a variable number of arguments, and even nowadays function prototypes are optional and come from header files that are simply textually #included into the code.
So every function call using this convention had to be followed by "add sp,n" to remove the pushed arguments, instead of performing the same operation as part of the return instruction itself. That's 3 extra bytes for every call that simply wouldn't have to be there if the CPU architecture's features were used properly.
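For the curious, here's roughly how the two conventions look from C; a minimal sketch assuming a 32-bit x86 target and MSVC-style keywords (GCC spells these __attribute__((cdecl)) / __attribute__((stdcall))):

    /* Both calls look identical in the source; only the generated code differs. */
    int __cdecl   add_caller_cleans(int a, int b) { return a + b; } /* caller: call; add esp,8 */
    int __stdcall add_callee_cleans(int a, int b) { return a + b; } /* callee: ret 8           */

    int main(void)
    {
        return add_caller_cleans(1, 2) + add_callee_cleans(3, 4);
    }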
And because operating system and critical libraries "must" be written in C, that's just a fundamental physical law or something you see, and we have to interface with them everywhere in our programs, and it's too complicated to keep track of which of the functions we have to call are using this brain-damaged convention, the entire (non-DOS/Windows) ecosystem decided to standardize on it!
Probably as a result, Intel and AMD even neglected optimizing this instruction. So now it's a legacy feature that you shouldn't use anymore if you want your code to run fast. Even though a normal RET isn't "RISC-like" either, and you can bet it's handled as efficiently as possible.
Obviously x86-64 has more registers now, so most of the time we can get away without pushing anything on the stack. This time it's actually Windows (and UEFI) which has the more braindead calling convention, with every function being required to reserve space on its stack for its callees to spill their arguments into. Because they might be using varargs and need to access them in memory instead of registers.
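The varargs case used to justify it looks like this; a sketch of why a variadic callee wants its register arguments to have stack slots to spill into (nothing in the source is Win64-specific, the ABI detail is what the compiler does with the first four arguments):

    #include <stdarg.h>
    #include <stdio.h>

    /* sum() has to walk its arguments through memory via va_list, so on Win64
       the compiler spills rcx/rdx/r8/r9 into the 32-byte area the caller was
       forced to reserve - the "home space" the convention mandates. */
    static int sum(int count, ...)
    {
        va_list ap;
        int total = 0;
        va_start(ap, count);
        for (int i = 0; i < count; i++)
            total += va_arg(ap, int);
        va_end(ap);
        return total;
    }

    int main(void)
    {
        printf("%d\n", sum(3, 1, 2, 3)); /* prints 6 */
        return 0;
    }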
And then there's the stack pointer alignment nonsense, which is also in the SysV ABI. See, C compilers like to emit hundreds of vector instructions instead of a single "rep movsb", since it's a bit faster. Because "everything" is written in C, this removed any incentive to improve this crusty legacy instruction, and even when it finally was improved, the vector instructions were still ahead by a few percent.
To use the fastest vector instructions, everything needs to be aligned to 16 bytes instead of 8. That can be done with a single "and rsp,-16" that you could place in the prologue of any function using these instructions. But because "everything" uses these instructions, why not make it a required part of the calling convention?
So now both SysV and Windows/UEFI mandate that before every call, the stack has to be aligned, so that the call instruction misaligns it, so that the function prologue knows that pushing an odd number of registers (like the frame pointer) will align it again. All to save that single "and rsp,-16" in certain cases.
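The bookkeeping it saves really is just this much; a toy run-through of the numbers (made-up stack address, x86-64 assumed):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t rsp = 0x7fffffffe000;  /* caller keeps rsp 16-byte aligned at the CALL */
        rsp -= 8;                       /* CALL pushes an 8-byte return address         */
        printf("on entry:   rsp %% 16 = %llu\n", (unsigned long long)(rsp % 16)); /* 8 */
        rsp -= 8;                       /* push rbp (any odd number of 8-byte pushes)   */
        printf("after push: rsp %% 16 = %llu\n", (unsigned long long)(rsp % 16)); /* 0 */
        return 0;
    }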
I've looked at the EXE file, and it's supposed to print "This program cannot be run in DOS mode." when run under (real or emulated) DOS.
The reason why it fails to do that is that the compiler did not set the memory allocation fields in the header correctly, so the stack either overwrites random memory or ends up somewhere that isn't writable at all!
Apparently nobody tests this stub code anymore, might as well leave it out completely...
(maybe Windows even accepts files that start with the PE header instead of MZ?)
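If you want to check a binary yourself, the fields in question sit right at the top of the MZ header; a quick-and-dirty dumper (field names follow the usual IMAGE_DOS_HEADER naming, truncated after the interesting ones; assumes a little-endian host):

    #include <stdint.h>
    #include <stdio.h>

    struct mz_head {
        uint16_t e_magic;    /* "MZ" */
        uint16_t e_cblp;     /* bytes used in the last 512-byte page */
        uint16_t e_cp;       /* number of 512-byte pages */
        uint16_t e_crlc;     /* relocation count */
        uint16_t e_cparhdr;  /* header size in 16-byte paragraphs */
        uint16_t e_minalloc; /* extra paragraphs required beyond the image */
        uint16_t e_maxalloc; /* extra paragraphs requested */
        uint16_t e_ss;       /* initial SS, relative to the load segment */
        uint16_t e_sp;       /* initial SP */
    };

    int main(int argc, char **argv)
    {
        if (argc < 2) return 1;
        FILE *f = fopen(argv[1], "rb");
        if (!f) return 1;
        struct mz_head h;
        if (fread(&h, sizeof h, 1, f) != 1) { fclose(f); return 1; }
        fclose(f);
        /* e_minalloc/e_maxalloc tell DOS how much memory beyond the image to
           reserve - which is where the stub's SS:SP usually has to point */
        printf("minalloc=%u maxalloc=%u ss=%04X sp=%04X\n",
               h.e_minalloc, h.e_maxalloc, h.e_ss, h.e_sp);
        return 0;
    }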
IIRC, even GW-BASIC allowed direct access to the hardware via peek/poke/inp/out. When you turned on the PC, the BIOS loaded the first 512-byte sector from floppy or hard disk into memory and transferred control to it. In theory, you could use the tools the machine came with to write a complete replacement for the operating system and install it so that after the BIOS loads that first sector, not a single machine instruction that you haven't written yourself runs.
I'm genuinely asking everyone here: How can I do this on a smartphone or tablet? Not just "root it", or install an "alternative OS" that is really just a tweaked Android, and "also you first have to buy this particular device it works on". Preferably without having to solder SMD components.
But from all I've read, I'm expecting the answer is "you can't". Which is too bad, since I have a couple of old devices from family laying around and would like to tinker with them. I'm not connecting them to any network as long as I don't have that level of control over what they do. Wouldn't do it with a new, "secure" device either -- the problem for me is what the built-in software does when working as intended by Goo666le + Shenzhen (I don't trust Apple either, and their devices seem even less hackable).
It's actually not; there's a phenomenon that Anthropic themselves observed with Claude in self-interaction studies, for which they coined the name 'The "Spiritual Bliss" Attractor State'. It is well covered in section 5 of [0].
>Section 5.5.2: The “Spiritual Bliss” Attractor State
> The consistent gravitation toward consciousness exploration, existential questioning, and spiritual/mystical themes in extended interactions was a remarkably strong and unexpected attractor state for Claude Opus 4 that emerged without intentional training for such behaviors.
I don't see how this constitutes in any way "the AI trying to indicate that it's stuck in a loop". It actually suggests that the training data induced some bias towards existential discussion, which is a completely different explanation for why the AI might be falling back to these conversations as a default.
I think a pretty simple explanation is that the deeper you go into any topic, the closer you get to metaphysical questions. Ask why enough times and eventually you get to what is reality, how can we truly know anything, what are we, etc.
It's a fact of life rather than anything particular about LLMs.
My thinking was that an exception was being handled and the error message was getting muddled into the conversation. But another commenter debunked me.
"Almost completely boiled frog petitions against raising the water temperature another degree"
It's great to see that some more people who were previously complacent are outraged about this move. But let's look back a bit:
In the early 1990s, Linus Torvalds started writing an OS kernel for 386-class PCs. He didn't need the approval of some corporation to allow him to run code on his own machine, or distribute it for others to run on theirs. The code didn't have to run as an "app" in some restricted sandbox under Microsoft's OS (not that back then, DOS or Windows were even in any way locked down the way modern operating systems are). Documentation for all the "standard" hardware like video, keyboard, hard disks, etc. was openly available, so it didn't have to rely on proprietary drivers.
This is how it was at one time, and what should have remained the standard today, but instead it's turned into some utopian dream that those who grew up with "smart" devices can't even conceive as possible anymore.
Google has taken what became of this code, and turned it into an "open" system that is pretty much designed to track every aspect of people's lives in order to more effectively target them with psychological manipulation, which is what advertisements really are. And you're not really getting "free stuff" in return for this invasion either, since pretty much everything you buy includes a hidden "tax" that goes to support this massive industry.
"A supercomputer in everyone's pocket"? Yes, but it's not yours, nor can you even know what it does. Even the source code that is available is millions of lines that you couldn't inspect in all your lifetime. Online 24/7, with GPS tracking your every move and a microphone that listens to what you say. Every URL you visit is logged. Your photos uploaded to "the cloud" and used to train AI.
The only solution is to no longer accept any of this, even if almost everyone else does. Even if it means giving up some convenience.
Apple showed the world that a smartphone that was enjoyable to use was possible.
I know it's hard to remember today, but in 2007, Apple was still the perennially-"beleaguered" underdog whose only big success story in marketshare terms was the iPod.
If the public had not loved the iPhone, it never could have "normalized" anything.
There are definitely aspects of the iPhone that it is fair to criticize Apple for. The rest of the world's wholesale embrace of its design—to the point of slavishly copying it, for several manufacturers at different points in time—can only be blamed on their lack of imagination and willingness to take risks, and on the public's unwillingness to give up the benefits of the iPhone just to get the much-less-obvious benefits of something more "open" or different.
Not sure if it's 100% slop, but as someone knowledgeable about older x86 processors, I can say after a casual look through "src/emulator/cpu.c" that the code is pretty terrible and often wrong.
For example, "subtract with carry" simply adds the carry to the second operand before doing the subtraction, which will wrap around to zero if it's 0xffff; this doesn't affect the result, but the carry out will be incorrectly cleared. Shift with a count of zero updates the flags, which it doesn't on real hardware. Probably many more subtle bugs with the flags as well.
It can't really be called a 286 emulator either, because it only runs in real mode! For some reason there is already code in there to handle the 32-bit addressing of the 386, but it can't have been written by someone who actually understands how the address size override works: it changes both the size and the encoding of the address register(s), so "AX+BX" is never a thing, nor is "EBX+ESI" (etc.) used if there is only an operand size override. There's also what looks like a (human?) copy-paste mistake with "EBX" in a place where it should be "EAX". At least all that code is #ifdef'd out.
And rather than running a real BIOS, it appears to handle its functions in the INT emulation code, but what is there looks too incomplete to support any real software.
There is emulation code for the PIC (interrupt controller) and PIT (timer), plus various other stuff. intcall86() filters out and handles INT 10h/13h/etc., but everything else is emulated "for real": ip/cs/flags are pushed, new cs:ip set, etc.
There is an 8KB BIOS of some sort defined as a big array which should handle boot and hardware interrupts.
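For anyone wondering what "emulated for real" means in practice, this is the textbook 8086 INT sequence (a sketch of the architectural behavior, not this project's code; the Cpu struct and push16() are made up for illustration):

    #include <stdint.h>

    typedef struct {
        uint16_t cs, ip, ss, sp, flags;
        uint8_t  mem[1 << 20];   /* 1 MiB real-mode address space */
    } Cpu;

    void push16(Cpu *c, uint16_t v)
    {
        c->sp -= 2;
        uint32_t addr = (((uint32_t)c->ss << 4) + c->sp) & 0xFFFFF;
        c->mem[addr]                = (uint8_t)(v & 0xFF);
        c->mem[(addr + 1) & 0xFFFFF] = (uint8_t)(v >> 8);
    }

    void do_int(Cpu *c, uint8_t vector)
    {
        push16(c, c->flags);
        c->flags &= (uint16_t)~((1u << 9) | (1u << 8));  /* clear IF and TF */
        push16(c, c->cs);
        push16(c, c->ip);
        uint32_t ivt = (uint32_t)vector * 4;             /* vector table at 0000:0000 */
        c->ip = (uint16_t)(c->mem[ivt]     | (c->mem[ivt + 1] << 8));
        c->cs = (uint16_t)(c->mem[ivt + 2] | (c->mem[ivt + 3] << 8));
    }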
> Probably many more subtle bugs with the flags as well.
Sure. And lots of subtle bugs in general. Which is fine for someone's personal fun side project.
Yeah, it's really irresponsible how Pascal sacrifices such safety features in the name of faster and more compact code... oh, wait, the compiler stops you from calling a function with incorrect parameters? Bah, quiche eaters!
"WDDM is a major overhaul that shifts responsibility of managing the GPU away from Win32k and gives better control over the GPU to the driver vendor. Dxgkrnl.sys, the DirectX graphics driver, talks to a miniport driver to provide varying levels of WDDM interfaces."
"Officially starting with Windows 8, every GPU driver for the system had to be a WDDM driver. But all that was really dropped was the miniport driver."
"For WDDM, the communication back to the miniport driver is more direct."
So does the miniport driver exist in modern Windows at all (and is an essential part of how WDDM works), or was it dropped?