With the rise of these retro-looking websites, I feel it's possible again to start using a browser from the '90s. Someone should make a static-site social media platform for full compatibility.
Not so much. While a lot of these websites use classic approaches (handcrafted HTML/CSS, server-side includes, etc.) and aesthetics, the actual versions of those technologies used are often rather modern. For example, TFA looks like a page I'd have browsed in IE5 as a kid, but if you look at the markup, it's using HTML5 tags and Flexbox (which became a W3C CR in 2017), while a period site would have used an HTML table to get the same effect. Of course, you wouldn't want to do it that way nowadays, because it wouldn't be responsive or mobile-friendly.
(I don't think this detracts from such sites, to be clear; they're adopting new technologies where they provide practical benefits to the reader, and many indieweb proponents are pushing the whole thing as a progressive, rather than reactionary, praxis.)
A couple of years ago I made this https://bootstra386.com/ ... it's for a project. This is genuinely 1994 style with 1994 code that will load on 1994 browsers. It doesn't force SSL, and it does work; I made sure of it.
The CSS on the page is only to make modern browsers behave like old ones in order to match the rendering.
The guestbook, if you notice, has some JavaScript to defeat spam: https://bootstra386.com/guestbook.html but it's the kind of JavaScript that Netscape 2.0 can run without issue.
> This is genuinely 1994 style with 1994 code that will load on 1994 browsers.
Unfortunately it won’t, at least not when you’re serving it with that configuration.
It uses what used to be called “name-based virtual hosting” (before it became the norm), which looks at the Host request header to determine which site to serve. Internet Explorer 3, released in 1996, was the first version of Internet Explorer to send a Host header. I think Netscape 3, also released in 1996, might’ve been the first version to support it as well. So, for instance, Internet Explorer 2.0, released in 1995, will fail to load that site at that URL. If you test locally with localhost, for instance, then this problem won’t be apparent, because you aren’t using name-based virtual hosting in that situation.
If you need to support early-1996 browsers and older, then your site needs to be available when you request it without any Host header. In most cases, you can test this by using the IP address in your browser location bar instead of the hostname.
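If you would rather script that check than dig out a period browser, here is a rough sketch (Python purely for convenience; bootstra386.com is the site from the comment above): open a plain socket and send a pre-1.1-style request with no Host header at all, which is roughly what a 1995 client would do.

    import socket

    # Send an HTTP/1.0 request with no Host header, the way a pre-1996
    # browser would, and return whatever the server sends back.
    def fetch_without_host(server, path="/"):
        request = f"GET {path} HTTP/1.0\r\n\r\n".encode()  # note: no Host: line
        with socket.create_connection((server, 80), timeout=10) as s:
            s.sendall(request)
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks)

    print(fetch_without_host("bootstra386.com")[:300])

If what comes back is the intended site's markup rather than some other virtual host or an error page, Host-less clients should be fine.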
Edit:
At one point around 1998, it wasn’t possible to directly install Internet Explorer 4 on Windows NT 4, because it shipped with Internet Explorer 2 and microsoft.com used name-based virtual hosting, or at least their downloads section did. So the method to install Internet Explorer 4 on Windows NT 4 was to use Internet Explorer 2 to download Netscape Navigator 4, and then use Netscape Navigator 4 to download Internet Explorer 4.
Using the IP address is a tricky one for something that is supposed to be Internet facing in the 2020s.
In the modern world, one common probe performed by attackers is to see whether a site responds to requests carrying its own IP address in the Host: header, or the reverse (address-to-name) DNS lookup result for that IP address, or the well-known default hostnames of some WWW servers.
What they're relying upon, of course, is people (or their software) answering for the IP address and the reverse-lookup domain names, but forgetting to install security controls for those as virtual hosts.
Or, equally bad, the fallback when no Host: header is supplied being a private/internal WWW site of some kind.
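For anyone curious what that probe looks like on the wire, here is a small illustration (Python; the 192.0.2.10 address and hostnames are placeholders): the same request sent twice with different Host: values, comparing what comes back.

    import socket

    # Ask the same server for the same path under different Host: values and
    # return the status line. Scanners do essentially this with the server's
    # own IP, its reverse-DNS name, and well-known default hostnames.
    def status_for_host(server_ip, host_value, path="/"):
        request = (f"GET {path} HTTP/1.1\r\n"
                   f"Host: {host_value}\r\n"
                   "Connection: close\r\n\r\n").encode()
        with socket.create_connection((server_ip, 80), timeout=10) as s:
            s.sendall(request)
            first_line = s.recv(256).split(b"\r\n", 1)[0]
        return first_line.decode(errors="replace")

    # Compare, for example:
    #   status_for_host("192.0.2.10", "www.example.com")  # the intended vhost
    #   status_for_host("192.0.2.10", "192.0.2.10")       # the probe

If those two answers differ in which content, or which security controls, you get back, that is exactly the gap being described.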
> For example, TFA looks like a page I'd have browsed in IE5 as a kid, but if you look at the markup, it's using HTML5 tags and Flexbox (which became a W3C CR in 2017), while a period site would have used an HTML table to get the same effect.
Are they going out of their way to recreate an aesthetic that was originally the easiest thing to create given the language specs of the past, or is there something about this look and feel that is so fundamental to the idea of making websites that basically anything that looks like any era or variety of HTML will converge on it?
I'm happy they didn't choose to go full authentic with quirks mode and table-based layouts, because Firefox has some truly ancient bugs in nested table rendering... that'll never get fixed, because... no one uses them anymore!
I think the layout as such (the grid of categories) isn't particularly dated, though a modern site would style them as tiles. The centered text can feel a little dated, but the biggest thing making it feel old is that it uses the default browser styles for a lot of page elements, particularly the font.
I think it’s the former. Many of these retro layouts are pretty terrible. They existed because they were the best available at the time, but using modern HTML features to recreate bad layouts from the past is just missing the point completely.
This is totally doable! It can be done with static sites + rss (and optionally email).
For example, I do this with my website. I receive comments via email (with the senders’ addresses hashed). Each page/comment-list/comment has its own RSS feed that people can “subscribe” to. This allows you to get notified when someone responds to a comment you left, or comments on a page. But all notifications are opt-in and require no login, because your RSS reader is fetching the updates.
Since I’m the moderator of my site, I subscribe to the “all-comments” feed and get notified upon every submission. I then go review the comment, and then the site rebuilds. There are no logins or sign-ups. Commenting is just pushing and notifications are just pulling.
I plan on open sourcing the commenting aspect of this (it’s called https://r3ply.com) so this doesn’t have to be reinvented for each website, but comments are just one part of the whole system:
The web is the platform. RSS provides notifications (pull). Emailing provides a way to post (push) - and moderate - content. Links are for sharing and are always static (never change or break).
The one missing thing is a “pending comments” cache, for when you occasionally get HN-like traffic and need comments to show up immediately (if temporarily). I’m building this now, but it’s really optional and would be the only thing in this system that even requires JS or SSR.
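To make the pull side concrete, here is a toy sketch of the per-page feed idea (Python, illustrative only, not the actual r3ply code): each moderated comment becomes an RSS item, and the commenter's address is only ever published as a short hash.

    import hashlib
    from email.utils import formatdate
    from xml.sax.saxutils import escape

    def hash_sender(address):
        # Publish only a short hash of the commenter's e-mail address.
        return hashlib.sha256(address.strip().lower().encode()).hexdigest()[:12]

    def comment_feed(page_url, comments):
        # comments: list of dicts like {"sender": ..., "body": ..., "ts": unix time}
        items = []
        for c in comments:
            who = hash_sender(c["sender"])
            items.append(
                "<item>"
                f"<title>Comment from {who}</title>"
                f"<link>{escape(page_url)}#comment-{who}-{int(c['ts'])}</link>"
                f"<pubDate>{formatdate(c['ts'])}</pubDate>"
                f"<description>{escape(c['body'])}</description>"
                "</item>")
        return ('<?xml version="1.0" encoding="UTF-8"?>'
                '<rss version="2.0"><channel>'
                f"<title>Comments on {escape(page_url)}</title>"
                f"<link>{escape(page_url)}</link>"
                "<description>Per-page comment feed</description>"
                + "".join(items) + "</channel></rss>")

A reader polling that file sees a new item whenever a comment is approved and the site rebuilds; no login anywhere.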
It does not work for people who only use the web interface of their e-mail. It would be nice to provide textual instructions (send this subject to this e-mail address) instead of mailto links only.
I really like that idea. I need to add it to my own site to test it out and let it bake.
Do you think this would work: a little icon that opens a pure HTML disclosure element with instructions, and a design with text laid out sort of in the shape of an email?
“(Text only instructions) Send an email like this:
To: <site>@r3pl.com
Subject: <page_or_comment_url>
Body:
<write your comment here, be careful to not accidentally leave your email signature>”
Your comment system is fantastic. I have been looking for something like this literally for decades. Hope you will open source it soon. I would like to use it with my blog.
I loaded up Windows 98SE SP2 in a VM and tried to use it to browse the modern web, but it was basically impossible since it only supported HTTP/1.1 websites. I was only able to find maybe 3-4 websites that still supported it and would load.
In theory, yes, although there are some fairly big stones falling in the avalanche of turning off HTTP/0.9 and HTTP/1.0 at the server end.
In practice, it’s going to be tricky to know without measurement; and the shifting of the default at the client end from 0.9 and 1.0 to 1.1 began back in 2010. Asking the people who run robots for statistics will not help. Almost no good-actor robots are using 0.9 and 1.0 now, and 0.9 and 1.0 traffic dropped off a cliff in the 2010s, falling to 0% (to apparently 1 decimal place) by 2021 as measured by the Web Almanac.
If a modern HTTP server stopped serving 0.9 and 1.0, or even just had a problem doing so to decades-old pre-1.1 client softwares, very few people would know. Almost 0% of HTTP client traffic would be affected.
And, indeed, http://url.town/ is one of the very places that has already turned 0.9 off. It does not speak it, and returns a 1.1 error response. And no-one in this thread (apart from edm0nd) knew.
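If you want to check a server of your own the same way, the 0.9 case is easy to poke at by hand because the request is a single line with no version token (Python here; url.town is the host from the comment above):

    import socket

    # Send a raw request string and show the first bytes of whatever comes back.
    def raw_exchange(host, request_text):
        with socket.create_connection((host, 80), timeout=10) as s:
            s.sendall(request_text.encode())
            return s.recv(256)

    print(raw_exchange("url.town", "GET /\r\n"))               # HTTP/0.9-style request
    print(raw_exchange("url.town", "GET / HTTP/1.0\r\n\r\n"))  # HTTP/1.0, no Host header

A true 0.9 response is just the body with no status line at all; getting a 1.1 status line back from that first request is the behaviour described above.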
I tried old macOS ... sorry, Mac OS ... and yeah the main problem was SSL/TLS. HTTP/1.0 was fine but the SSL crypto algorithm negotiation never went through.
If your definition of social-media includes link aggregators, check https://brutalinks.tech. I've been working on things adjacent to that for quite a while now and I'm always looking for interested people.
The biggest issue there is that, regardless of how old your HTML elements are, the old browsers only supported SSL 2/3 at best, and likely nothing at all, meaning you can't connect to basically any website.
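You can see how far the floor has moved by asking a modern server what it will actually negotiate (a minimal sketch with Python's ssl module; example.com is just a stand-in). A client that only offers SSL 2/3 is turned away before a single byte of HTML, however retro, is transferred.

    import socket, ssl

    # Connect with a modern default TLS client context and print what got negotiated.
    host = "example.com"
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.version())  # typically TLSv1.3 or TLSv1.2
            print(tls.cipher())   # negotiated cipher suite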