Yeah, that becomes apparent pretty early on. Annoyingly, the author refers to "the" compiler and sort of assumes the entire world is on Windows. This turns what would otherwise be a useful but incomplete reference into something more or less useless for anyone who ever works outside a Microsoft platform.
I have to say I run into the OS assumption quite a lot. I'm on OS X, so I tend to notice it whether Linux or Windows is the one being assumed. For me they happen roughly equally, but that's skewed by the kinds of apps I look for.
I've noticed that Windows and Microsoft developers tend to call it x64, while Unix-like developers (Linux, OS X, BSDs, etc.) call it x86_64.
Does anyone have any idea why this is?
My personal theory is that mainstream Windows only runs on x86 (not counting early NT kernels, or Windows RT), so when they ported their OS to Intel and AMD's 64-bit CPU, that was the only 64-bit CPU their OS could run on, and they just called it x64. Unix-like systems supported many other 64-bit architectures, so they needed to distinguish the 64-bit extension of the x86 architecture (x86_64) from the other 64-bit architectures they supported.
This is just a theory though, so if anyone has any concrete reasons for this difference in terminology, I'd love to hear it.
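You can actually see this naming split persist in what each OS reports about itself. A quick sketch, assuming Python is available (the exact string returned depends on the OS and hardware, so the comments below describe typical values, not guarantees):

```python
import platform

# platform.machine() returns the architecture name the OS itself uses.
# Typical values on the same 64-bit x86 hardware:
#   Windows:      "AMD64"
#   Linux:        "x86_64"
#   macOS (Intel): "x86_64"
# ARM systems usually report "arm64" (macOS) or "aarch64" (Linux).
print(platform.machine())
```

So even the operating systems themselves never settled on one name for the architecture.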
Microsoft first ported NT to IA-64 (Intel Itanium) and called it IA64. The next 64-bit port was to AMD's x86-64, which they called x64. At the time AMD called it AMD64 and Intel had started to call it Intel64. The terminology really hadn't stabilized on "x86_64" yet.
This is the most likely reason, but I see it labelled as amd64 most of the time. x64 is just a shorthand because people were so used to saying x86 for so long.