Intel mulls cutting 16 and 32-bit support, booting straight into 64-bit mode (theregister.com)
167 points by rwmj on May 25, 2023 | 170 comments


"Since its introduction over 20 years ago, the Intel® 64 architecture became the dominant operating mode."

This is how Intel takes credit for AMD's x86-64.


I still shake my head at how they were able to successfully rebrand it from amd64 to x86-64; yet ia32 is still semi-accepted as the "official" name of x86.


> I still shake my head at how they were able to successfully rebrand it from amd64 to x86-64

It's the opposite: the original name was x86-64, and amd64 is a later rebranding. See, for instance, the original web site for this (then) new architecture: https://web.archive.org/web/20000829042251/http://www.x86-64...


Without going through various repos, I'm struggling to recall ever even seeing "x86-64" - It's always amd64 that I look for - so the rebrand wasn't that successful.


I think both are pretty common. E.g. uname -a says x86_64 and GCC/Clang use x86-64 for defining the target while, on the other hand, MSVC (and a lot of Microsoft stuff in general) uses amd64 as do many BSDs.


> MSVC (and a lot of Microsoft stuff in general) uses amd64 as do many BSDs.

That's partly because Microsoft had support for ia32 (x86) and ia64 (Itanium). So amd64 was to differentiate the architecture (and it was unclear Intel would ever extend x86).


Debian-based systems are slightly confused with dpkg architecture returning amd64 and uname returning x86_64.
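For concreteness, here's a toy Python sketch of the alias mess the thread is describing. The alias sets and the `canonicalize` helper are my own invention for illustration, not any real tool's API:

```python
# Toy normalizer for the many spellings of the same two ISAs.
# The alias tables reflect names mentioned in this thread;
# "canonicalize" is a hypothetical helper, not any real tool's API.
AMD64_ALIASES = {"amd64", "x86-64", "x86_64", "em64t", "ia-32e", "intel 64"}
IA32_ALIASES = {"x86", "ia32", "ia-32", "i386", "i686"}

def canonicalize(name: str) -> str:
    """Map an architecture spelling to a single canonical token."""
    n = name.strip().lower()
    if n in AMD64_ALIASES:
        return "x86_64"   # what `uname -m` prints on 64-bit Linux
    if n in IA32_ALIASES:
        return "i386"
    raise ValueError(f"unknown architecture name: {name}")

if __name__ == "__main__":
    for n in ("amd64", "x86-64", "EM64T", "ia32"):
        print(n, "->", canonicalize(n))
```

In other words: `dpkg --print-architecture`, `uname -m`, and `gcc -dumpmachine` can all disagree on the spelling while meaning exactly the same ISA.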


Compiler explorer (godbolt.org) uses x86-64, as do the man pages for gcc and clang. I suspect there are plenty of other places too.


I remember it a fair bit right after 64 bit became mainstream. In the excitement phase it was generally amd64 then as it became fully established and adopted it was most often referred to as x86-64, at least in the press.


But my filesystem is full of references like /usr/lib/x86_64-linux-gnu


It makes it clearly different from arm64 at least, which may be part of it.


AMD64 is specific to AMD, then Intel released EM64T. x86-64 is the common subset between the two.


It was IA-32e, a terrible name.


amd64 was an attempt to "brand" the 64-bit version of x86, just as ia32 was an attempt to "brand" the 32-bit version of x86. Many people rejected both and just say x86 and x86-64 as generic names.


> amd64 was an attempt to "brand" the 64-bit version of x86

The "64-bit version of x86" is actually the new ISA created by AMD that was backwards compatible with x86 and added a 64-bit mode.

I'm sure that you agree that the company that creates an ISA has the right to name it.


They have every right to say what they'd rather it be called, and others have every right to ignore them and use a generic name if they prefer.


I get the opposite impression: amd64 is something I see quite often, while ia32 is pretty much exclusively referred to as x86. (Even though for the average user, ia32 would be much easier to recognize as "32-bit", and amd64 as "64-bit", both of which are often found on download buttons.)


Don’t forget arm64 which is now appearing.


Yup, I've misread and confused the two too many times


x86-64 was the original name by AMD, they rebranded it shortly before commercial availability to AMD64. Kernels and compilers had already been updated at that time and just kept the "old" name.


I’m not sure they’ve been successful. The string amd64 is everywhere.


Earlier today I used `arch -x86_64` on my mac to run something in Rosetta (Intel mode)


amd64 looks much too similar to arm64 (though that's the latter's fault)


arm64's official name is AArch64


I don't think that's the case anymore. At the very least they are interchangeable names, and I believe that "arm64" is the name going forward.

I read this somewhere a couple weeks back, but I don't remember the source.


https://developer.arm.com/architectures mentions AArch64, no mention of arm64. There's "A64" for the instruction set, and https://developer.arm.com/Architectures/A64%20Instruction%20... also mentions the various armv8-* names, but still no mention of arm64.


There's also `aarch64`.


My understanding is that the cross licensing agreement with AMD allows them to license the technology and brand it as their own.


Thank Microsoft for holding up 64-bit Windows until Intel had a chip.


Windows XP x64 Edition was in testing for AMD64 around 2003-2004, before Intel's x64 chips were out; at the time the only CPUs that could run it were Athlon 64s.


I recall using XP x64 edition on an Athlon64. I remember it having such poor software support that it was barely usable. I heard it was actually a version of Windows Server that they put the XP GUI on top of, but I'm not sure if that's true (Wikipedia only seems to partially corroborate it).


>I heard it was actually a version of Windows Server that they put the XP GUI on top of

In a way, yes. It used the kernel from Server 2003.[1] Server/client kernels were unified for Vista SP1 and Server 2008, and not sooner just because the 2008 release lagged behind Vista's debut.

Prior to Vista, this kernel difference meant drivers for Windows server and client often had to be different builds. An anecdote that I recall was that although xp64 and 2003x64 could share drivers, it was typical to see drivers "not provided" for xp64, so the people who wanted that kernel just ran Windows Server as a desktop OS instead.

1: https://en.wikipedia.org/wiki/Windows_XP_Professional_x64_Ed...


I ran Windows XP x64 on my personal desktop until 2013. It was solid and dependable for me, though I did heavily customise it (custom Visual Styles themes, disabling features I didn’t like, some hacks to bring forward some features from Windows 7, etc.)


Windows Server and Client were always much closer than appeared at first glance, as long as you lined up the versions. So it almost certainly was as described.


First 64 bit windows was running on DEC Alpha. Not Intel or AMD


Hi. Article author here.

It was but not when you think... 64-bit NT was never released. NT on Alpha was 32-bit.

It only leaked this month. I wrote about it:

https://www.theregister.com/2023/05/19/first_64bit_windows/


That’s true, but most Microsoft software was never ported to Alpha. Or Itanium.

So, you could get a 64-bit computer running Windows, but you wouldn’t be able to play Pinball, or read your emails, or run some other software you needed to work.

Oddly enough, the situation was much better with 64-bit Unix, where your usual tools worked flawlessly.


One of the unsung advantages of having the source available: you just need a compiler and some libraries, and suddenly you have a ton of software even on an entirely new architecture.


Not all software was easy to port from 32 to 64 bits. There are still open-source programs which will not compile and run on 64-bit architectures.


This is one interesting reason why porting to different platforms (word size, pointer size, endianness) helps find bugs.

Also, that’s why we all should write unit tests.


By contrast, Apple in 2006 launched its first Intel Macs with 32-bit Core CPUs. Although new models with 64 bit-capable CPUs replaced them almost immediately (the June 2006 Macbook only lasted until November), that one decision caused a decade of software- and OS-related grief for Apple and its developers.


They were doing Windows NT for the Itanium. They were holding out hoping that this thing would take off.


I remember they didn’t port a lot of stuff to it - it was running mostly the BackOffice family of server apps.

I don’t think one could run Outlook on them.


When I was at Microsoft during the time I had X86, Alpha, MIPS, PPC (for a short time) and Itanium. We only did the builds for Itanium once in a while because the developers and the testers shared the expensive hardware. I believe the only versions of the product (Systems Management Server) were for the X86, Alpha and MIPS.


It is also about removing some I/O instructions entirely and removing all others from ring 3 ("userspace"). Removing 16-bit addressing support frees up the 67h address size override prefix. They also suggest having CS/DS/ES/SS segment override prefixes do the same as in 64-bit mode: nothing, which should allow for repurposing those 4 entries in the primary opcode map some years down the line.

They are suggesting freeing up 5 bytes in the primary opcode map in all modes (4 for string I/O, 1 for the 67h address size override), plus 8 bytes in ring 3 (userspace), plus making it possible to free up 4 more (segment override prefixes). That means 13 (17) free bytes for userspace!

I think part of the plan is to enable more compact encoding of certain existing instructions (and allow for compact encoding of future instructions).

I think they want to do that to enable wider instruction decoders -- more compact encodings => more instructions per 16-byte/24-byte/32-byte/whatever size window the decoders look at. Wide decoders are inherently harder for x86 than for architectures like ARM because there are so many possible instruction lengths (all values between 1 and 15 inclusive), so they need all the juice they can squeeze out here.
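A toy sketch of the decode-boundary problem described above (the lengths here are made up; real x86 instructions are 1 to 15 bytes, and the point is that instruction N+1's start is unknown until instruction N's length has been decoded):

```python
# Toy illustration of why wide parallel x86 decode is hard: the start
# of instruction N+1 depends on the length of instruction N, so naive
# boundary-finding is inherently serial.
def find_boundaries(lengths):
    """Given per-instruction byte lengths, compute start offsets serially."""
    starts, pos = [], 0
    for n in lengths:
        assert 1 <= n <= 15, "x86 instructions are 1..15 bytes"
        starts.append(pos)
        pos += n
    return starts

# A fixed-width ISA (e.g. AArch64's 4-byte instructions) needs no
# serial pass at all; every boundary is known up front.
def fixed_boundaries(count, width=4):
    return [i * width for i in range(count)]
```

Real decoders speculate on boundaries and throw away wrong guesses, which costs transistors and power; fewer possible lengths and prefixes make that speculation cheaper.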

I am surprised they didn't suggest getting rid of the BOUND instruction (which is used for the EVEX prefix in 64-bit mode).

And what about HLT? It isn't allowed in ring 3, anyway, so maybe they are looking at repurposing that as well?

And INT1, too?


From what I can tell, they're not actually planning on reclaiming the opcode space. Instead, they're mapping those things to #UD, which allows a hypervisor to emulate those instructions for 32-bit guest OSes.


For now, yes.


Wider decoders seems like the most plausible motivation for this, yes.


It still blows my mind that a 64-bit i9-13900K with 24 cores, 36 MB of cache, and a turbo speed of 5.8 GHz has to pretend to be a 16-bit 8086, which had a clock speed of 5-10 MHz and was discontinued in 1998, just to properly boot itself.

(An exaggerated oversimplification I know, but still)


Yeah but who, other than the person responsible for the firmware that boots the system, cares? These simplifications will have zero observable benefits to you. They won’t save a lot of die space or cost.

The complexity of the far past is so small that it is inconsequential today. The PS1 was implemented in spare die area of the PS2’s USB controller. Time marches on and die space becomes free.


The OS cares, sadly. When the OS, running on one CPU, instructs another CPU to start (which happens: your system boots into your OS, and it also resumes from suspend with only one CPU running OS code), it sends a message called a SIPI to that CPU. The CPU responds by running OS code, in real mode, at an address <1MB. So the OS needs to find valid memory below 640kB that doesn’t conflict with any other horrible legacy use.
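The address arithmetic behind that SIPI is simple to sketch: the message carries an 8-bit vector, and the target core starts executing in real mode at physical address vector * 4096. A few lines of Python (the helper name is mine):

```python
# The SIPI message carries an 8-bit "vector"; the woken core begins
# executing in 16-bit real mode at physical address vector * 4096,
# so the OS's trampoline must sit on a 4 KiB boundary below 1 MiB.
def sipi_entry(vector: int) -> int:
    assert 0 <= vector <= 0xFF, "SIPI vector is a single byte"
    return vector << 12  # vector * 4096

# Even the largest vector lands below the 1 MiB real-mode limit:
assert sipi_entry(0xFF) < 0x100000
```

This is why even a fully 64-bit kernel must today carry a little blob of 16-bit startup code; a SIPI that dropped the core straight into long mode would let that blob go away.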

Also, the “LA57” 5-level page table design has a little whoopsie: you can’t switch between 4 and 5 level page tables without exiting paged mode entirely.


While all true, obviating all this has no material impact on the user.


I care. I want to learn the entire thing I'm using, which means I appreciate things that are simpler and faster to learn. You can say I am a stupid or irrational person not worthy of consideration for wanting this but I am who I am and I want to spend money on computers that cater to me.


It's very similar to chip die space: modern technologies are so complex that learning how the 8086 works is a very small effort compared to learning how the latest Intel CPUs work. And emulation of older generations is a small part of this complexity.


I'd argue that 16-bit mode is conceptually more complicated than modern 64-bit modes, at least for userspace. 64-bit mode has a single flat address space (FS/GS are just opaque offsets into this address space), memory-mapped I/O, and a single system call trap instruction. 16-bit mode has segmentation, I/O ports, multiple trap instructions (and call gates!), address space wraparound...


Learning useful things is fun, learning useless things is unpleasant, so in some sense it's more effort.


While working on not-always-pleasant stuff at my paid job, as a hobby I sometimes learn 'useless' stuff from the past. On average, old technologies look simpler to me (though not without quirks) and more interesting to learn, while modern ones often look over-complicated.

Just to clarify - I don't think that Intel should retain 16bit mode, but don't think removing it would make a big difference.


When people say "who cares?" they generally mean "what non-vanishingly-small percentage of people care?" and I think that's still an unanswered question!


Then you are obviously running OpenBSD on ARM.


Do you want a computer architecture that replaces and obsoletes everything you've previously learned or one that evolves and adds new features on top?


Either depending on the situation.


In that case you should probably not look at x86 at all. It's one of the most convoluted architectures out there, especially because it lugs all of that legacy stuff along.


> The PS1 was implemented in spare die area of the PS2’s USB controller.

This really confused me until I figured out you weren't talking about the IBM PS/1 and PS/2.


I don't think they are. IBM PS/2 was more than a decade before USB.

But then, the PS2 console didn't have USB either, so I'm not sure what they're talking about.


> The PlayStation 2 also features two USB ports

https://en.wikipedia.org/wiki/PlayStation_2


PS2 had 2 USB ports on the front, did it not?


Yeh and the original had a FireWire port too!


You're right. I guess I never noticed because they didn't use it for the controllers.


True that it's invisible to end-users, but it's still a pain in the ass to people writing OSes. Lowering the barrier to entry even a little bit will be appreciated by hobbyists.


From a hobbyist's point of view I'd actually argue the opposite. The nice thing about the old stuff (e.g. the 8259 PIC support they're removing here) is that it's much simpler to get going. It has very large limitations, but those don't matter when just starting, and it serves as a good stepping stone and learning opportunity before eventually moving on to the modern and more complicated stuff. Additionally, if you're using a bootloader then the bulk of this is already solved for you anyway.


The modern stuff is only more complicated because it had to be built around and on top of dozens of dead civilisations under it.


This is the simplistic explanation.

I think that besides that, some people are paid for complexity. That would justify the evolution of SW in the last 10 years.


How so? The bootloader already runs in long mode with UEFI and it also takes care of bringing up other cores so this is not really a problem in my experience.


UEFI doesn't really take care of bringing up the other cores. There's a way to get access to them, but you have to give them back to UEFI before exiting boot services, so that doesn't really help your kernel. You still need to do the SIPI into x86-16 code to really take control of the AP cores.

Even the new way (that isn't actually implemented AFAIK) just SIPIs into long mode code, it doesn't use the UEFI multicore stuff.


Linux still needs to use a Startup IPI to bring up other CPUs, which starts them running in 16-bit mode at an address under 1MB.

Having a fully 64-bit way to bring up other CPUs would simplify that.


> The PS1 was implemented in spare die area of the PS2’s USB controller

Whoah, that is WILD. Any info about where I can learn more about this?


It sounds more like the I/O firmware was running on dedicated PS1 hardware:

https://www.theguardian.com/technology/2013/dec/12/ps4-and-x...

> The PlayStation 2, meanwhile, had the original PlayStation chipset built in, so it ran pretty much any PSone title – and when that chip wasn't being used for backwards compatibility it doubled as an input/output processor, which was pretty canny.



https://www.copetti.org/writings/consoles/playstation-2/#io

Essentially the thicc PS2s include the PS1 chip and normally use it for IO and also for PS1 compatibility.


I can't find anything claiming this. This is literally the first time I have ever seen this claim.

The original fat PS2 included the majority of the PS1 hardware physically in the machine. More and more of it got offloaded to pure software emulation with later model revisions.

I, too, would like to see info on this supposed "spare die space on the USB controller" claim. Because it would be really neat if true lol


IIRC, when you boot linux on a mainframe, you load your kernel and initrd into a virtual reader device so the machine can load it in 80-column punchcard sized chunks.


Fun to think that it’s more or less how zVM starts stuff. It took me a while to mentally accept the concept of VM that zVM (in reality, I was playing with VM/370) has, which is not like anything non-mainframe users would be familiar with.

But you can also run Linux directly on an LPAR under the thin hypervisor (whose name I forget) that runs zVM.

The indignity of the mainframe is that it’s booted under control of an x86 machine that itself boots up as an 8086. At least these days the service elements run Linux.


> But you can also run Linux directly on an LPAR under the thin hypervisor (whose name I forget) that runs zVM.

The name is PR/SM - which, from what I’ve heard, actually started life as a modified version of VM

> The indignity of the mainframe is that it’s booted under control of an x86 machine that itself boots up as an 8086. At least these days the service elements run Linux.

Poor OS/2, no more booting mainframes for it anymore, unless they are very old ones


> Poor OS/2, no more booting mainframes for it anymore, unless they are very old ones

At least it's not Windows.


And on the PDP-10, it contains a virtual robot to toggle the virtual switches on the virtual PDP-11, to load a loader that runs the virtual tape drive to load the startup program for the PDP-10.

j/k I have no idea how linux boots on Z.


This actually isn't too far from the truth. The VAX 8800 series of minicomputers used a cut-down PDP-11 to boot it, and to act as a console.

https://en.wikipedia.org/wiki/VAX_8000#VAX_Console


It's even worse than that. If it was just the CPU...

It also simulates an old keyboard controller for the purpose of enabling the A20 gate, which enables an address line.

https://en.m.wikipedia.org/wiki/A20_line
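To illustrate what the A20 gate actually does, here's a toy Python model (the helper is illustrative, not any real API): with A20 gated off, bit 20 of the physical address is forced to zero, so real-mode addresses just past 1 MiB wrap around to low memory, which some 8086-era software relied on.

```python
# Toy model of the A20 gate. Real-mode seg:off addressing can reach
# just past 1 MiB (up to 0x10FFEF); with A20 gated off, bit 20 is
# masked to zero and those addresses wrap to low memory, 8086-style.
def phys_addr(seg: int, off: int, a20_enabled: bool) -> int:
    linear = ((seg << 4) + off) & 0x1FFFFF   # 21-bit real-mode reach
    return linear if a20_enabled else linear & 0xFFFFF

# FFFF:0010 is exactly 1 MiB with A20 on, but wraps to 0 with A20 off:
assert phys_addr(0xFFFF, 0x0010, a20_enabled=True) == 0x100000
assert phys_addr(0xFFFF, 0x0010, a20_enabled=False) == 0x0
```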


The A20 gate was deprecated (and made software-based) in Nehalem, then was removed completely in Haswell. It's never been a feature of the Zen µarch(s).


Support for the A20 gate was actually already dropped as per the article you linked:

> Intel no longer supports the A20 gate, starting with Haswell.


It was not needed anymore in Haswell, but the support was still there. It isn't supported at all anymore.


Emulation of the keyboard controller is done by the firmware. The processor's support for A20 was tied to port 92h.


The A20 line was famously used to hack the original XBox. By clearing A20, you prevent the XBox from booting from the hidden ROM, and it boots from regular flash ROM instead.

Some great videos about that on YouTube if you want to look.


Really it is a vast oversimplification. It's only what the system is made to look like it's doing; it's actually doing something completely different.

https://mjg59.dreamwidth.org/66109.html


Slightly off topic but it blows my mind that the 13900K only supports 2 memory channels. My 10 year old i7 supported 4 (with 8 slots)!


Partly to simplify design and reduce cost, partly for market segmentation.


This is only kind of true: the 13900K supports 2 channels, but with DDR5 each channel ends up being split into two smaller sub-channels.


I guess people got used to the 286 rebooting itself to access high memory and have allowed these sorts of things ever since.


I used to rant over this but I kinda like vestigial retrocompatibility of that sort.


What are the downsides of this?


Slow.


For those wondering about the impact of removing 32-bit support:

> It will still be possible to start up an x86-32 operating system inside a VM – these have to emulate system firmware in any case, alongside the emulated graphics cards, network cards and so on that they must provide.

> You will also still be able to run x86-32 binaries and apps in ring three on your 64-bit OS in ring zero – so long as the operating system provides the appropriate libraries and APIs, of course.


I'm sure those options will be dog slow and unreliable.

I buy wintel so I can run my software from 15 years ago without interference. Literally the only reason I don't buy a mac. If they want to take that away, well, good luck to them


Given that Intel is merely considering it at this stage, they're probably still several years away from actually releasing chips without 16/32-bit support.

By that time, CPUs will be faster, and Intel will have had plenty of time to figure out the emulation layer.

These days, how much performance-critical software is even being built in 32-bit?


> These days, how much performance-critical software is even being built in 32-bit?

I think the better question is how much performance critical software built in 32 bit is still relied on, and may potentially outlive hardware with 32 bit support


If it's old, even with emulation it'd have a good chance to be as fast or faster than on its original target chips. So you'd need to not only rely on it, but also be demanding more from it over time. And given that you're running 32-bit software in a 32-bit vm on a 64-bit cpu in the first place (in a scenario where you're spending significant amounts of time in ring 0!), you clearly do not really care about performance anyway.


Everything is performance-critical. I buy new hardware so it runs my software as fast as it can. Why would I upgrade my CPU if I knew my programs would start running slower?


I completely agree. I still run Windows 7 on most of my systems (and please ... not interested in hearing any "bot farm" or other specious bullshit about how my machines have been Taken Over By Hackers!), and almost all of the very high quality software I use is 32-bit, because the 64-bit versions either don't exist or are burning dumpster fires.

Example? Lately I've been doing a lot of video transcoding with ffmpeg. The latest 64-bit version running under Windows 11 is lucky to average 10 to 14 frames per second. The last 32-bit version of ffmpeg running on Windows 7 does 64 frames per second or better.

In every case that comes to mind, the 32-bit versions of the applications I use far outperform the 64-bit versions. The only exception I can think of at the moment is Notepad++. NP++ is just plain awesome as a 64-bit application!


Intel gets a lot of criticism for its management, and Apple get a lot of credit for the M1, but it's worth keeping in mind the tradeoffs/sacrifices for backward compatibility. I'm guessing Intel could create something radically more power efficient and performant if they gave up on decades worth of drop-in compatibility.


> create something radically more power efficient

Lots of the M1's efficiency comes from business decisions. Apple chooses to use an expensive, low clock TSMC node. They use a lot of expensive die area on wide cores and cache, so they can keep clocks and voltages (relatively) low. And they use packaged memory for efficiency, but at higher cost and with no modularity.

Not saying it isn't a great design, but Intel/AMD could make a far more efficient CPU if they had the right market incentives to try. We have already seen a hint of this with Van Gogh (the Steam Deck chip) and their rumored "premium" laptop chips with big GPUs and an M1 Pro-like memory bus.


A not insignificant part comes from the fact that they can better parallelize instruction decoding because ARM is simpler and more regular than x86. x86's compressed variable length representation hurts it here. They try to make up for it but there's meaningful benefit from Apple's approach. The other bit is that ARM concurrency is better aligned with how HW wants to work/better able to optimize performance than x64 is (ability for acquire/release semantics vs sequential)

That being said, it's not like we're talking about deprecating that part of the instruction set which they really should do (i.e. build your application for x64.risc which tells the CPU the instruction set is going to be an alternate fixed-representation x64). Could maybe even have a special instruction where you can switch out of it temporarily so that you have back-compat with normal assembly to give software writers time to port.

I'm all in favor of this though. Getting rid of cruft that's more than 20 years old is well beyond time.


I don't know enough technically to comment, but am curious how this plays into the mobile gaming device SoCs/CPUs from AMD. I can say I really like my M1 Air: generally good enough battery life and great I/O responsiveness in general. Of course, then I price out an M2 Mini, and once I add enough storage/RAM it no longer feels worth the price. Why do they even sell a model with under 512GB storage or 16GB memory other than to have a lower "starting from" price? Even getting to a 1TB disk and 16GB or 32GB memory already feels like way too much of a price hike.


> mobile gaming device SOC/CPUs from AMD

There are none! Van Gogh (in the Steam Deck) was the first and last one!

AMD had a whole family of low power (~9W), graphics heavy chips on their roadmap... And when the time for the first one came, not a single laptop maker picked it up. So AMD seemingly canceled the line, but Valve swooped in and used the only survivor in their handheld.

What you see in the ROG Zephyrus and such are rebranded high power laptop chips, which is why they suck so much battery compared to the Steam Deck (even though the Deck chip is much older).


> I'm guessing Intel could create something radically more power efficient and performant if they gave up on decades worth of drop-in compatibility.

Like the success of IA-64?


I think Itanium was a success for Intel in some respects. It effectively killed off PA-RISC, Alpha, and MIPS (outside of network hardware).


That's fair. On the other hand, you can still run x86_64 applications on Apple Silicon through Rosetta. Sure, it's not 32-bit x86, but there are ways to provide backwards compatibility without supporting decades of legacy in hardware.


That seems very unlikely. How much overhead do you believe there to be? Intel is about equal to apple silicon in performance and work per joule and per watt. If either of them could do much better, they would.


A lot of this has to do with keeping Microsoft happy, probably.

Once MS skips to ARM, let's see what Intel does.


>Once MS skips to ARM

MS has tried at least twice to move to ARM and failed woefully: Windows RT, Windows 10 on ARM.


I think their mistake wasn't necessarily with ARM, so much as an exclusive contract with Qualcomm iirc.


Here's more:

They made the marketing blunder of calling the ARM OS "Windows" when it couldn't run existing software. It's borderline fraud. Many people (probably most) returned their RT devices to the store because of this.

Windows ARM laptops were more expensive, slower, & under-speced than x86 ones.

Microsoft restricted software download to store only - pretty dumb to me.

x86 emulation was eventually released after Windows 10 on ARM shipped, but it was slow and probably wasn't reliable.

Summary, high price but low performance, dumb os restrictions, & non existent ecosystem killed ms ARM efforts.


The performance issues weren't locked in by their exclusive deal with Qualcomm?

I mean, sure there were other issues, they can call it "Windows" all they like, just like switching mac from PPC to Intel to ARM is still MacOS/OSX. Even if not everything is 100% compatible over time. That they made other dumb decisions doesn't make it less so.

Much like ARM on Linux doesn't run everything out of the box, doesn't make it not Linux.


They were somewhat successful with WinCE from the late '90s to the early '00s.


I've always been curious what Intel could do with fixed-width instructions in a newer 64-bit ISA.



There are ancient 32-bit things that still run on modern operating systems, that you simply can't get a 64-bit version of. On my Linux machine, one specific such thing is the Brother printer driver. Brother laser printers are awesome. Cheap, reliable, inexpensive aftermarket toner etc. But to make them work (in my experience, and certainly for the scanner function) you need their binary driver and it has not (last I looked) been updated in years, and it's 32-bit. No problem, install the 32 bit compatibility libraries and go.

On a pure 64-bit CPU, unless the 32-bit compatibility problem is solved with emulation at the OS level, all this old 32-bit stuff is history. Or does "cutting 32-bit mode" not mean that, i.e. 32-bit binaries still run in 64-bit mode?


> On a pure 64-bit CPU, unless the 32-bit compatibility problem is solved with emulation at the OS level, all this old 32-bit stuff is history.

And you've already answered your own question. Virtualization and emulation is precisely how this will be cared for. Heck, in a lot of cases, that's already how old 16- and 32-bit software is run, as you also need associated compatible operating systems, not to mention clock timings and so forth in the case of games, and it's a lot easier and more accurate to just fire up DOS or Windows 3.11 in a VM than to run the code natively with OS-level compatibility.


32-bit Windows software is not emulated; you call a 32-bit API function and it forwards to 64-bit mode at the system call level. Basically, instead of SYSENTER you transition back to 64-bit mode to perform the system call.


The article says they are dropping support for 32-bit OSes but not 32-bit programs, essentially. You'll also still be able to emulate 32-bit OSes; they just can't own the hardware at boot.


Intel's press release has a chart explaining that 32-bit compatibility will still be possible in Ring 3, but not in Ring 0. Not sure what this entails for 32-bit kernel drivers since Linux uses a mix of Ring 0 and Ring 3. But I assume this means that 32-bit user-mode applications and some drivers will still work.


64-bit Linux does not have 32-bit kernel modules.


Printer drivers are in user space. 32-bit user programs remain.


From another comment:

> You will also still be able to run x86-32 binaries and apps in ring three on your 64-bit OS in ring zero – so long as the operating system provides the appropriate libraries and APIs, of course.


Time to RE.


The traditional approach to proprietary printer drivers is to found a software movement that reimplements the entire operating system from scratch, and invent licenses that prevent distributors from keeping their changes to the system secret.

This approach has been somewhat successful.


That's fine, but for me, it stops being x86 if it can't run legacy code.

A 64-bit only CPU may be a sensible thing, and we can use emulators for old stuff, but when we are at it, do we really need x86 at all? If we are to break compatibility, couldn't we just switch to ARM, RISC-V, or something else? I mean, that's what Apple do, they did it successfully, and more than once (68k, PPC, x86, ARM), but Apple is Apple, and x86 is (was?) all about backwards compatibility.



Hi. Article author here.

Yes, that is the original, but I tried to add a lot of historical context and precedent. Whether that is interesting or worthwhile I leave to the readers, but my story seems to be doing well and getting lots of comments and shares, so I guess I succeeded.


A reference to the current context (e.g. RISC-V exists) wouldn't have hurt.



True but that was TH's story. This is a different one which I began writing before TH had published.


Intel should gut as much cruft as possible: 16- and 32-bit support is cruft for anything running a 64-bit OS. While they're at it, they should also double the general-purpose register set with another REX-prefix trick. Indeed, AMD should have done this back in 1999.


> Intel should gut as much cruft as possible

Gutting 32-bit support will kill off a lot of legacy applications. Apple could only do it because there aren't that many enterprise applications and they provided reasonably well working tooling to ease the effort.


The current proposal seems to keep 32 bit support for applications (running in Ring 3), just removing support for having 32 bit operating systems (in Ring 0).


Wouldn't that kill any hardware that only has 32 bit drivers as well?


Seeing as Windows 11 joins macOS in being 64-bit only (at an OS-release/kernel level), that hardware is presumably already effectively "dead" in terms of working with current operating systems anyway.


Plus all cards with 32-bit option ROMs.


My understanding is that this kills 32-bit OS support. Eg, you can't run a 32-bit Linux / Windows / BSD distro on it. However, you can still run legacy 32-bit applications in your 64-bit OS.


> Gutting 32-bit support will kill off a lot of legacy applications.

Nah, they can be emulated. Just as Apple continues to support x86 apps on ARM, and supported PowerPC apps on Intel, and m68k apps on PowerPC.


There is a huge amount of Windows software that's still 32-bit (or at least, installs into `Program Files (x86)`), so unless Microsoft is going to start handling that kind of emulation natively in their OS (which they may decide to do), it's going to break a huge amount of stuff.

Removing 16-bit and 32-bit OS support, on the other hand, makes perfect sense.


It works both ways. Axing 32-bit support is going to make it impossible to use a large collection of software. On the other hand, supporting legacy architectures, legacy APIs, etc. for decades also makes developers and users lazy. There needs to be some balance.


Yes, it would have to be done at the OS level. I don't see that as a blocker.


>>so unless Microsoft is going to start handling that kind of emulation natively

They already do, and have since the x64 version of XP.

It's called WoW64.


WoW64 just handles DLL loading/memory mapping/kernel interfaces. It doesn’t do binary translation of the user code at all. Windows x86 on Windows ARM does binary translation.


From MSDN (or Microsoft Learn, whichever it is now):

"WOW64 is the x86 emulator that allows 32-bit Windows-based applications to run seamlessly on 64-bit Windows"

Sounds like emulation to me.


Thunking is not emulation.

Thunking has been widespread since OS/2 1.x in the mid-1980s.

https://en.wikipedia.org/wiki/Win32s

The original Wikipedia article on thunks has been generalized into uselessness. :-(


WoW64 is just the Windows equivalent of Linux's /lib32/*.


Killing off 32-bit x86 support broke many games on Apple devices.

A friend of mine loved playing a specific game on her Mac that broke, and she was pissed about it for years. She later built a Windows gaming PC.


I'm not saying Apple executed perfectly, I'm saying Apple demonstrates how you can simply and easily emulate older systems at the OS level and that we shouldn't block hardware development on this. If she wants to play an old game there are a ton of options.


Clearly Apple does not demonstrate how you can "simply and easily" emulate older systems, given that they broke a bunch of games that were merely 5-6 years old at the time. They wouldn't have broken compatibility if it was "simple" or "easy".

The people who actually demonstrate this to any extent are Valve, Microsoft, and the people working on FEX. Definitely not Apple, and it's definitely not simple. But it's still worth doing.

The fact is that if everyone did what Apple did, she wouldn't be able to play the game on modern PCs any more. I think that is not an acceptable outcome: for archival purposes if nothing else, it is important that every game ever made be playable forever.

Deprecating 32-bit support means that macOS is never going to be a serious gaming platform again. Many games are done and never see updates again. At least Asahi is going to support older games so you can still use the hardware.


You seem to be talking past me. The thing they did implement, they implemented extremely well (x64 translation on ARM). In fact they even went so far as to implement changes to the physical silicon to improve emulation performance (implementing a runtime toggle to enable total store ordering).

They knocked it out of the park. In some cases emulated Intel apps on Apple Silicon run faster than they did natively on Intel Apple hardware. You are criticizing them for not doing what you wanted which was 32-bit emulation. This is not the same thing as recognizing them for the incredibly good 64-bit emulation they achieved.

> Deprecating 32-bit support means that macOS is never going to be a serious gaming platform again.

I don't think it's been a 'serious gaming platform' since, er, ever.

That's not going to change by enabling emulation of a dead platform.


Except Apple doesn't support 32-bit applications on newer versions of macOS regardless of the underlying architecture, they effectively killed gaming on Apple computers when they decided to remove 32-bit support.


They're not killing userland 32-bit support; they're instead killing the 32-bit stuff that doesn't look like modern 64-bit stuff from a kernel perspective. (Basically, they're killing it at the kernel level, not the user level).


Confusingly, cutting 32-bit support out of Ring 0 does not break WoW64, and you can still run 32-bit applications: the proposal keeps the very simplified segmentation that Win32 programs use.


> only two mainstream PC OSes ever actually used more than these two rings. One was IBM's OS/2

I'm curious, would this also then affect ArcaOS?


ArcaOS is doomed, to put it mildly. Arca and eComStation were never given access to IBM source, so everything is bolted on through binary patching.


If it's still using IBM code, then it definitely will. IBM's OS/2 kernel, even the 32-bit one from 2.0 onwards, had 16-bit code and the extra ring usage for IOPL.

Then there's the fact that the rest of it is 32-bit. (-:


Hi. Article author here.

Yes, it will.

I am currently evaluating ArcaOS.

It does not run on UEFI machines yet, although a version that can is in beta, and they're getting a copy ready for me right now.


I doubt it. This is probably talking about early versions of OS/2, like, pre-2.0.


According to https://www.os2world.com/wiki/index.php/Databook_for_OS/2_%E... OS/2 Warp 4 uses Rings 0, 2 and 3.


32-bit Xen also used ring 1.



