Hacker News | octotoad's comments

AFAIK, Galaxy was exclusive to Alpha, with no equivalent on Itanium, or any other platform.


Galaxy depended on carefully coded cooperation between kernels plus firmware support, but otherwise operated exactly like the proposed Multikernel Architecture.

The firmware support was mainly there to allow booting of separate partitions, but otherwise no virtualisation was involved: all resources were exclusively owned.


Close. They bought the company that made it.


Same location, similar situation. I thought it was just me. At least twenty recruiters on my LinkedIn list, and not a peep since I toggled my 'Open to work' status weeks ago. Used to receive a fairly consistent stream of messages when I wasn't ready.


Applied for a few positions so far just to see what's up. Only one gave me a rejection; the rest ghosted. Moving states next year and need to find a new gig before then. Sounds like I'll need to start applying a lot more.


Yep, similar for me when I switched 'Open To Work' on. Good luck out there.


> Dan Dobberpuhl was one of the original Alpha designers before he left DEC to start a chip design firm called P.A. Semi, which Apple bought in 1998 for its talent to work on their Arm processors.

P.A. Semi didn't even exist in 1998. It was founded in 2003, and was acquired by Apple in 2008.


I also found this line referring to the advantages of 64 vs 32 bits questionable:

> Bits don’t change the processing power; they just change the amount of addressable memory

Now in my personal experience since the early 1980s, increased addressable memory is the biggest advantage for most personal computer workloads (at least until software is rewritten to use 64-bit instructions). But there are definite processing-power advantages too, and for the target market of DEC’s Alpha chip those were likely important.


It is questionable and indicates the author doesn't understand CPU bit width.

There are three key bit widths that impact a CPU:

Address bus: how much RAM the CPU could theoretically address.

Data bus: how many bits at a time the CPU can read from the RAM it can address.

Register: how much data an instruction can operate on at a time.

The author appears to understand only address bus width, and may be influenced by early (valid, depending on the workload) complaints that the transition from 32 to 64 bits didn't benefit end users.

Going from 32-bit to 64-bit registers can provide a significant performance improvement, but only if you're doing math on 64-bit values. Without specialized instructions, math on 32-bit and smaller values won't improve just because the registers got wider.

Increasing the size of the data bus can provide a big performance improvement if the previous bus was narrower than the register width.

Many (most? citation needed) current, mainstream CPUs use the same bit width for all three (address, data, register).

The 8088 was notable back in the day because it had a 20-bit address bus, 16-bit registers, and an 8-bit data bus.

Edit: The Alpha had 64-bit registers -- it wasn't just a 32-bit register/data bus CPU with a 64-bit address bus.


> Many (most? citation needed) current, mainstream CPUs use the same bit width for all three (address, data, register).

This is quite wrong. For most x86_64 processors:

Address bus is 42-48 bits

Data bus is 32-128 bits

Registers are 64 bits but usable as 32/16/8 bits

Even on most ARM processors, neither the address bus nor the data bus is the same width as the registers.


Don’t forget the SIMD registers, which can be 128/256/512 bits wide (NEON/AVX2/AVX-512).

It’s very difficult to give just a single number for data bus width. Do you add all the DRAM channels? DDR5 has 2 × 32-bit channels where DDR4 has a single 64-bit channel. What does bus width mean for PCIe, which uses very fast differential serial lines? Really, it hasn’t been a useful way to describe system performance since the 1990s.


Awesome! Thank you for posting this. I'd edit my response to include this, but it's too late.


Great, a more detailed yet succinct explanation of what I was trying to say.


That confused me a bit, thanks. In '98, Macs still used the PPC architecture, and the iPhone didn't exist. I don't know what kind of CPU the iPod was using at the time -- I'd guess ARM. Still, that seemed like something that could get away with an off-the-shelf CPU, so buying a bunch of chip-design talent doesn't seem like it would have been super rational then.


For that matter, PA Semi was itself a Power licensee. They were developing the PA6T, which might have been in whatever the next PowerBook would have been. After the purchase, Apple still had to make promises about its availability because it was being used in some military applications even though they would never use the PA6T in one of their own machines. I think the AmigaOne X1000 was the only generally available computer that ever used it.


The Newton was Apple’s first big usage of ARM

https://en.m.wikipedia.org/wiki/Apple_Newton


iPod was only released in 2001, though, and the idea to make a PMP only really got steam in 2000, with much of the work happening after Fadell was brought in (the urgency is why Apple contracted out much of the work; even Fadell was initially a contractor, only getting fully hired when his proof of concept got the nod).

OTOH the ill-fated Newton was also ARM-based, as was the eMate pseudo-laptop.


Probably an XQuartz (or whatever it's called now) issue.


Makes sense, given Rob Pike was one of Limbo's creators.


IIRC VSI never got the VAX stuff in the deal with HP. I don't think HP will ever bother to bring back the hobbyist program just for VAX, unfortunately.

There are easy ways around this lack of official licenses. Ways that I can't/won't go in to here.


Obviously a 1990s signature scheme can likely be brute-forced today, or has some known vulnerability. I remember back in the day discovering that the license key scheme for Sun's NFS for DOS had the property that reordering the bytes of one valid key produced a second, also valid, key.


If you google for "VMS Liberation Front" you'll find your answer.


I "registered" for an Alpha license last week (I own several systems), but all I got was an email with credentials to access VSI's SFTP server, using a shared 'ACOMMUNITY' username. I cannot access the portal with the email address I used for this, and attempting a 'Forgot password' just returns "Error 404: Email not found.". You can see 'ICOMMUNITY' and 'XCOMMUNITY' directories in the server, but I'm not able to access them.

Has anyone that has requested a license (Alpha/Itanium) in the last few weeks actually been able to log in to the portal?


MacFUSE isn’t even open source anymore, so it would likely involve Apple forking a version prior to the license change and maintaining that themselves.


I wonder how much of Mark Shuttleworth's personal funding is still at play in keeping Canonical alive.

A quick web search returns articles claiming the company has been profitable since 2018 and that an IPO may be in the works. I can understand how something like Red Hat has managed to survive over the past twenty years, given the name it has built for itself in the "enterprise" Linux world (for better or worse), but Canonical continuing to operate off the back of Ubuntu has always baffled me.


Ubuntu has been the most popular Linux distribution for nearly 20 years. Are you surprised that translates into support contracts in the enterprise world?

It isn’t to me. People keep using what they are familiar with on their servers. Plus it’s one of the default options at most hosting providers.


To an extent, I actually am. Red Hat became successful because they went after and conquered the "replace the expensive proprietary Unix RISC server with x86+Linux, but still want a support contract for CYA reasons because that's how we do it in the enterprise world" market.

Ubuntu is wildly popular on desktops and in the cloud (and largely for good reasons, all the phone-home and marketing etc. aside, it's a solid and polished distro), but I expect a vanishingly small fraction of Ubuntu users actually pay a dime to Canonical. Now Canonical has been trying to monetize Ubuntu in various ways over the years, some better and some worse, but I do hope they succeed so that Ubuntu is long term sustainable.


Bingo.

There is no AIX or HP-UX in the wild anymore, at least not outside of niche or legacy deployments, because RHEL ate them up.

Every enterprise I've ever been at required official licenses and official escalation and support SLAs. By offering enterprise support and reasonable response times they became the de facto winner of the Unix enterprise world.

Ubuntu still reigns supreme for individual users, but they're a drop in the bucket compared to F500 companies.


> People keep using what they are familiar with on their servers.

Nobody learning Linux for the first time today is excited about Ubuntu.

I wonder if this will seriously erode Canonical's position in the Enterprise market, or if they're sufficiently entrenched to just coast for another generation.

