6 total and they spanned from 2000 to 2001. Just 1 year.
That was fairly typical at the time. It wasn't uncommon for a game publisher to patch their games; it was uncommon for that patching to continue for long after the initial release. After all, they wanted their game devs working on something other than the old release. The patches were strictly a goodwill thing to make sure the game kept selling.
The personal computing era happened partly because, while there was demand for computing, users' connectivity to the internet was poor or limited, so they couldn't just connect to the mainframe. We now have high speed internet access everywhere - I don't know what would drive the equivalent of the era of personal computing this time.
> We now have high speed internet access everywhere
As I travel a ton, I can confidently tell you that this is still not true at all, and I'm kinda disappointed that the general rule of optimizing for bad reception died.
> the general rule of optimizing for bad reception died.
Yep, and people will look at you like you have two heads when you suggest that perhaps we should take this into account, because it adds both cost and complexity.
But I am sick to the gills of using software - be that on my laptop or my phone - that craps out constantly when I'm on the train, or in one of the many mobile reception black spots in the areas where I live and work, or because my rural broadband has decided to temporarily give up, because the software wasn't built with unreliable connections in mind.
It's not that bleeding difficult to build an app that stores state locally and can sync with a remote service when connectivity is restored, but companies don't want to make the effort because it's perceived to be a niche issue that only affects a small number of people a small proportion of the time and therefore not worth the extra effort and complexity.
Whereas I'd argue that it affects a decent proportion of people on at least a semi-regular basis so is probably worth the investment.
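To make that concrete, here's a minimal sketch of the "store locally, sync later" pattern - a local SQLite outbox that gets flushed whenever the network happens to be up. The endpoint, schema, and function names are made up for the example, not any particular product's API:

```python
# Minimal sketch of local-first writes with deferred sync.
# SYNC_URL and the outbox schema are hypothetical, for illustration only.
import json
import sqlite3
import urllib.request

SYNC_URL = "https://example.com/api/notes"   # hypothetical remote service

db = sqlite3.connect("local_state.db")
db.execute("CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, payload TEXT)")

def save_note(text: str) -> None:
    """Writes go to the local outbox first, so the app keeps working offline."""
    db.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps({"text": text}),))
    db.commit()

def try_sync() -> None:
    """Flush queued writes whenever connectivity happens to be available."""
    rows = db.execute("SELECT id, payload FROM outbox ORDER BY id").fetchall()
    for row_id, payload in rows:
        req = urllib.request.Request(
            SYNC_URL, data=payload.encode(),
            headers={"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(req, timeout=5)
        except OSError:
            return  # still offline: keep the row and retry later
        db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
        db.commit()

save_note("written on the train")  # succeeds even in a reception black spot
try_sync()                          # call periodically or on a network-change event
```

That's it - a table and a retry loop. The hard parts (conflict resolution, schema migrations) are real, but the baseline of "don't crap out when the network does" is not exotic engineering.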
It's always a small crisis deciding what app/book to install on my phone to give me 5-8 hours of reading on a plane. I found one - Newsify - combined with YT caching.
Moving services to the cloud unfortunately relieves developers of a lot of the complexity of software development with respect to the menagerie of possible hardware environments.
It of course leads to a crappy user experience if they don't optimize for low bandwidth, but they don't seem to care about that. Have you ever checked out how useless your algorithmic Facebook feed is now? Tons of bandwidth, very little information.
It seems like their measure is that time on their website equals money in their pocket, and baffling you with BS is a great way to achieve that - until you never visit again in disgust and frustration.
I don't think the "menagerie of possible hardware environments" excuse holds much water these days. Even web apps still need to accommodate various screen sizes and resolutions and touch vs mouse input.
Native apps need to deal with the variety in software environments (not to say that web apps are entirely insulated from this), across several mobile and desktop operating systems. In the face of that complexity, having to compile for both x86-64 and arm64 is at most a minor nuisance.
I used to work for a company building desktop tools that were distributed to, depending on the tool, tens of thousands of users on the low end and hundreds of thousands on the high end. We had one tool that was nominally used by about a million people but, in actuality, the real number of active users each month was more like 300k.
I was at the company for 10 years and I can only remember one issue where we could not reproduce or figure it out on tools that I worked on. There may have been others for other tools/teams, but the number would have been tiny because these things always got talked about.
In my case the guy with the issue - who'd been super-frustrated by it for a year or more - came up to our stand when we were at a conference in the US, introduced himself, and showed me the problem he was having. He then lent me his laptop overnight[0], and I ended up installing Wireshark to see why he was experiencing massive latency on every keystroke, and what might be going on with his network shares. In the end we managed to apply a fix to our code that sidestepped the issue for users with his situation (to this day, he's been the only person - as far as I'm aware - to report this specific problem).
Our tools all ran on Windows, but obviously there were multiple extant versions of both the desktop and server OS that they were run on, different versions of the .NET runtime, at the time everyone had different AV, plus whatever other applications, services, and drivers they might have running. I won't say it was a picnic - we had a support/customer success team, after all - but the vast majority of problems weren't a function of software/OS configuration. These kinds of issues did come up, and they were a pain in the ass, but except in very rare cases - as I've described here - we were always able to find a fix or workaround.
Nowadays, with much better screensharing and remote control options, it would be way easier to deal with these sorts of problems than it was 15 - 20 years ago.
[0] Can't imagine too many organisations being happy with that in 2025.
Have you ever distributed an app on the PC to more than a million people? It might change your view. Browser issues are a different argument and I agree with you 100% there. I really wish people would pull back and hold everyone to consistent standards but they won't.
I work on a local-first app for fun and someone told me I was simply creating problems for myself and I could just be using a server. But I'm in the same boat as you. I regularly don't have good internet and I'm always surprised when people act like an internet connection is a safe assumption. Every day I ride an elevator where I have no internet, I travel regularly, I go to concerts and music festivals, and so on.
I don't even travel that much, and still have trouble. Tethering at the local library or coffee shops is hit or miss, everything slows down during storms, etc.
One problem I've found in my current house is that the connection becomes flakier in heavy rain, presumably due to poor connections between the cabinet and houses. I live in Cardiff which for those unaware is one of Britain's rainiest cities. Fun times.
Access. You cannot use Starlink on a train, flight, inside buildings, etc. Starlink is also not available everywhere: https://starlink.com/map. Also, it's not feasible to bring it with me a lot of the time, for example on my backpack trips; it's simply too large.
For many reasons. It's not practical to have a Starlink antenna with you everywhere. And then yes, cost is a significant factor too - even in the dialup era, satellite internet was a thing that existed "everywhere", in theory...
Privacy. I absolutely will not ever open my personal files to an LLM over the web, and even with my mid-tier M4 Macbook I'm close to a point where I don't have to. I wonder how much the cat is out of the bag for private companies in this regard. I don't believe the AI companies founded on stealing IP have stopped.
Not a single person I know who has an Apple device would claim that; nobody cares about or even knows in detail the stuff we discuss here. It's the HN bubble at its best.
Another point: subjectively, the added privacy compared to, say, South Korean products is mostly a myth. It certainly doesn't apply if you are not a US citizen, and even then, keeping your fingers crossed that all the three-letter agencies and the device maker are not analyzing every single data point about you continuously is naive. What may be better is that the devices are harder to steal and take ownership of, but for that I would need to see some serious independent comparison, not paid PR - from which HN is not completely immune.
Centralized only became mainstream when everything started to be offered "for free". When the choice was to buy outright or pay recurrently, people more often chose to buy.
There are no FOSS alternatives for consumer use unless the consumer is an IT pro or a developer. Regular people can’t use most open source software without help. Some of it, like Linux desktop stuff, has a nice enough UI that they can use it casually but they can’t install or configure or fix it.
Making software that is polished and reliable and automatic enough that non computer people can use it is a lot harder than just making software. I’d say it’s usually many times harder.
I don't think that is a software issue but a social issue nowadays. FOSS alternatives have become quite OK in my opinion.
If computers came with Debian, Firefox and LibreOffice preinstalled instead of only W11, Edge and some Office 365 trial, the relative difficulty would be gone I think.
Same thing with most IT departments only dealing with Windows in professional settings. If you even are allowed to use something different you are on your own.
Some people, but a majority see it as free. Go to your local town center and randomly poll people how much they pay for email or google search, 99% will say it is free and stop there.
> We now have high speed internet access everywhere
This is such an HN comment, illustrating how little your average HN user knows of the world beyond their tech bubble. Internet everywhere - you might have something of a point. But "high speed internet access everywhere" sounds like "I haven't travelled much in my life".
I remember hearing that Google, early in its history, had some sort of emergency backup codes that they encased in concrete to prevent them from becoming a casual part of the process - and they needed a jackhammer and a couple of hours when the supposedly impossible happened after only a couple of years.
> To their great dismay, the engineer in Australia could not open the safe because the combination was stored in the now-offline password manager.
Classic.
In my first job I worked on ATM software, and we had a big basement room full of ATMs for test purposes. The part the money is stored in is a modified safe, usually with a traditional dial lock. On the inside of one of them I saw the instructions on how to change the combination. The final instruction was: "Write down the combination and store it safely", then printed in bold: "Not inside the safe!"
> It took an additional hour for the team to realize that the green light on the smart card reader did not, in fact, indicate that the card had been inserted correctly. When the engineers flipped the card over, the service restarted and the outage ended.
There is a video from the LockPickingLawyer where he receives a padlock in the mail wrapped in so much tape that it takes him whole minutes to unpack it.
Concrete is nice, other options are piles of soil or brick in front of the door. There probably is a sweet spot where enough concrete slows down an excavator and enough bricks mixed in the soil slows down the shovel. Extra points if there is no place nearby to dump the rubble.
Probably one of those lost in translation or gradual exaggeration stories.
If you just want recovery keys that are secure from being used in an ordinary way, you can use Shamir's secret sharing to split the key across a couple of hard copies stored in safety deposit boxes in a couple of different locations.
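For anyone curious what that looks like in practice, here's a minimal sketch of Shamir secret sharing - the prime, threshold, and share count are illustrative choices, not a hardened implementation:

```python
# Minimal Shamir secret sharing sketch: split a key into 5 shares,
# any 3 of which recover it. Parameters here are illustrative only.
import secrets

PRIME = 2**521 - 1  # a Mersenne prime larger than any 256-bit secret

def split_secret(secret: int, shares: int, threshold: int):
    """Split `secret` into `shares` points; any `threshold` of them recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    points = []
    for x in range(1, shares + 1):
        y = 0
        for c in reversed(coeffs):  # Horner's rule: evaluate the polynomial at x
            y = (y * x + c) % PRIME
        points.append((x, y))
    return points

def recover_secret(points):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    key = secrets.randbits(256)  # stand-in for a real recovery key
    parts = split_secret(key, shares=5, threshold=3)
    assert recover_secret(parts[:3]) == key  # any 3 of the 5 shares suffice
```

Print the five points, stick each on paper in a different box, and no single box compromise reveals the key.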
The Data center I’m familiar with uses cards and biometrics but every door also has a standard key override. Not sure who opens the safe with the keys but that’s the fallback in case the electronic locks fail.
The memory is hazy since it was 15+ years ago, but I'm fairly sure I knew someone who worked at a company whose servers were stolen this way.
The thieves had access to the office building but not the server room. They realized the server room shared a wall with a room that they did have access to, so they just used a sawzall to make an additional entrance.
My across-the-street neighbor had some expensive bikes stolen this way. The thieves just cut a hole in the side of their garage from the alley; the security cameras were facing the driveway, with nothing on the alley side. We (the neighborhood) think they were targeted specifically for the bikes, as nothing else was stolen and your average crackhead isn't going to make that level of effort.
A few years ago, I compared the trackpad on a ThinkPad T14 and a T14s and found that while on paper they are similar, the T14s has noticeably less friction and better tracking accuracy. I'll put the blame on PC vendors for playing around with their trackpads to sell higher end machines.
And even the T14s (gen 1) has a "cheap" trackpad with a plastic film on top instead of a glass surface. The day before yesterday I did the upgrade from film to glass on my T14s and it isn't the big leap I was hoping for. Sure, friction went down a bit, but precision and gesture detection were already good before. The same upgrade on a T480s was a bigger improvement.
Compared to my MacBook Pro (M1 Pro) the trackpads of my ThinkPads are always worse.
Overall I'm in the same situation as the author of the article (if you substitute Arch for pop!_os with Cosmic) and totally agree with him.
The market seems incapable of producing a modern VB-like development environment for the web. I have theories, but I'm still unsure why that is. However, we do have a bunch of no-code tool companies that lock your logic into their ecosystem and charge quite a bit of money every month.
Also, drones are currently being flown by soldiers in FPV goggles, so swarming is not very practical. That will change once we have swarm software and there is a need for it.
Or just extend the logic to materiel instead of personnel, like Ukraine did with the airbase attacks earlier this year: for the price of a few dozen < $1k drones, you can eliminate $50M-$150M+ aircraft? The asymmetry is insane.
There's also nothing that practically stops those same tactics from being aimed at other soft infrastructure targets: electrical substations, telco facilities, water treatment facilities... the nightmare scenario is taking down transmission lines and switching stations outside, say, a large nuclear power plant during a heat wave. The nuke itself is hardened, obviously, but who cares if it can't transmit the power it's generating to the people that need it?
It also took 18 months to insert the people, set up the shell company, smuggle materials, manufacture, etc. It also had the advantage of surprise - the first such attack at such a distance from the front line. It is unlikely such an attack will be replicated, just as a box-cutter hijacking of 747s to attack buildings will not succeed again.
WordPress isn't just for blogs, and I think it might fit your use case of documenting a set of API endpoints. There is likely a free Swagger plugin for WordPress that would help you, although I haven't really looked.
Other than that, you could look at using a static site generator like MkDocs or Docusaurus. It'll generate a site of HTML pages, and you could either manually upload them to your host, or you could set up an automation that updates your host when you merge changes into git.
I think my response illustrates another problem with modern tools compared to the 90s - there isn't any single tool that edits HTML/CSS and uploads them. You now have to glue together several tools.
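To make the "glue together several tools" point concrete, this is roughly the script the static-site route ends up requiring - a hedged sketch assuming mkdocs and rsync are installed, with a made-up remote host and path:

```python
# Hypothetical glue script: build the MkDocs site and mirror it to a web host.
# REMOTE is a placeholder; adjust to your own hosting setup.
import subprocess

REMOTE = "user@example-host:/var/www/docs/"

def publish() -> None:
    # `mkdocs build` renders the Markdown sources into a static ./site directory
    subprocess.run(["mkdocs", "build", "--clean"], check=True)
    # rsync mirrors ./site to the host, deleting files removed from the docs
    subprocess.run(["rsync", "-az", "--delete", "site/", REMOTE], check=True)

if __name__ == "__main__":
    publish()
```

Not hard, but it's exactly the kind of plumbing that a 90s "edit and upload" tool handled in one window.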
They recently fixed the friction with odd-numbered releases by providing 24 months of support.