
Don't hold your breath. It takes ages to develop a browser that's as fast as Firefox. CSS and JS are no joke.


Ladybird more than doubled its JS performance in the five months between January and May, and is now only about 2x slower than Safari: https://x.com/awesomekling/status/1790098727081836697

Things are progressing faster than you'd think.


First, that's uncompiled/un-JITted code. Second: the 80/20 rule.


There are sure to be diminishing returns. Ladybird is clearly improving this fast right now because the devs are picking the low-hanging fruit. But we also don't need 100% parity before Ladybird is usable. And when users pick it up, that begets more donations and more dev resources, which means more improvement. So there is reason to be optimistic.


That plus a few of the team have been deeply involved in other browsers. This is nothing new for them.

Doesn't make it certain, but it is a good foundation to be working from.


Slow is the price we'll have to pay. Just like how VPNs slow down your connection.

Or, if one dreams for a moment: if slower becomes the norm, web apps will have to become less complicated. Fast seems to just enable more and more ad tech.


Maybe it's time to move away from the whole HTML/CSS/JS and HTTP stack, and from browsers?

Let's build something that is ad-resistant from the start. Something that uses native technologies.

Edit: We need something that does not need the backing of large corporations or huge funding to access the web.

The internet was always simple. We have become overdependent on browsers and the HTTP stack.


Or just don't access any content that is funded by advertising. The nonprofit web still exists. But for all content that's not someone's spare-time passion project, someone's gotta foot the bill.


Yes. Different protocols can be used for different purposes, but they will need to be FOSS, with specifications that are not overly complicated.

Using some older protocols, such as IRC, NNTP, Gopher, and email (especially plain-text email rather than HTML email), is one option.

There are also some newer protocols and file formats for some uses, e.g. the Gemini protocol/file format, the Scorpion protocol/file format, the Spartan protocol (which uses the same file format as Gemini, although with an extra link type), Nightfall Express (probably the simplest one, although this means that virtual hosting will not be available), and perhaps some others.

(One thing I have read somewhere (I cannot find it now) is three rules for making such a "small web" protocol: (1) Don't make it a subset; (2) Don't make it compatible; (3) Make it better for everyone (authors, readers, programmers, etc). They also discussed separating the "document web" from the "application web"; I agree with that too, although of course there is the consideration of how such a separation should work. I have ideas about this, and I believe my own designs follow these three rules better than Gemini and Spartan do.)

I have written my own list of the "small web" protocols/file formats that I am aware of: scorpion://zzo38computer.org/smallweb.txt (originally posted to Usenet, although it has been updated since then). One way to access this file would be a command such as:

  echo 'R scorpion://zzo38computer.org/smallweb.txt' | nc zzo38computer.org 1517 | less
(If you have other mirrors of this document, perhaps with your own changes, you could tell me and I could add it to the list of mirrors.)

Another proposal is the following suggestion to make a "small web browser": gemini://xavi.privatedns.org/small-web-browser.gmi (the document I linked above describes how to access this file, in case you don't know). I agree with some of the points made but disagree with others; I will comment on some of these points below. (However, you could use some of these ideas for the HTTP/HTML part of a multi-protocol browser.)


Comments about gemini://xavi.privatedns.org/small-web-browser.gmi :

I do not believe that just using the existing HTTP/HTML is the way to do it (and other people agree with me about this), although it is one way to do it, and can be combined with others.

Such a "small web" browser could be designed to support multiple protocols and file formats. So, in addition to HTTP(S): also Gopher, Gemini, Spartan, Scorpion, Nex, local files, and possibly NNTP (although this would not be as good as dedicated news-reader software, it would at least allow reading articles from an NNTP server without needing to set up a dedicated NNTP client; Lynx also supports NNTP).
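To sketch what that per-scheme dispatch might look like (a minimal sketch in Python; the ports are the conventional defaults for each protocol, and certificate checking and error handling are omitted):

  import socket, ssl
  from urllib.parse import urlparse

  def fetch(url):
      u = urlparse(url)
      if u.scheme == "gopher":     # plain TCP: selector + CRLF
          sock = socket.create_connection((u.hostname, u.port or 70))
          sock.sendall((u.path or "/").encode() + b"\r\n")
      elif u.scheme == "gemini":   # TLS: full URL + CRLF
          ctx = ssl.create_default_context()
          ctx.check_hostname = False
          ctx.verify_mode = ssl.CERT_NONE  # a real browser would do TOFU checking
          sock = ctx.wrap_socket(socket.create_connection((u.hostname, u.port or 1965)),
                                 server_hostname=u.hostname)
          sock.sendall(url.encode() + b"\r\n")
      elif u.scheme == "spartan":  # plain TCP: host, path, content-length
          sock = socket.create_connection((u.hostname, u.port or 300))
          sock.sendall(f"{u.hostname} {u.path or '/'} 0\r\n".encode())
      else:
          raise ValueError("unsupported scheme: " + u.scheme)
      chunks = []
      while (data := sock.recv(4096)):
          chunks.append(data)
      sock.close()
      return b"".join(chunks)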

> While I do think HTTP/1.1 is good enough for most tasks [...] there are several aspects that I do not particularly like: Cookies, User agent, Referer, Etag, Cross-origin requests

I do not like these features much either. HTTP/1.1 is still good enough for many tasks; it is messy in some ways and more complicated than it could be, but for the purpose of accessing services that use HTTP (which is what the article describes doing), it will be good enough. (One feature of HTTP that I think is useful, and that Gemini, Spartan, and Gopher lack (but Scorpion does not), is Range requests, although that isn't that useful for a browser and is more useful for a download manager (including command-line programs such as curl). Multiple ranges in a single request seem an unnecessary complexity to me, though.)
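As a concrete example, resuming a download with a single Range request could look like this (a sketch using the Python standard library; the host, path, and byte offset are made up):

  import http.client

  conn = http.client.HTTPSConnection("example.com")
  conn.request("GET", "/big-file.bin", headers={"Range": "bytes=1000000-"})
  resp = conn.getresponse()
  # 206 Partial Content means the server honoured the range;
  # 200 means it ignored the header and sent the whole file.
  print(resp.status, resp.getheader("Content-Range"))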

> Support a small subset of HTTP/1.1, supporting GET/POST, while effectively removing support for most HTTP headers.

Agree. (You could also support adding arbitrary extra headers by user configuration; e.g. the user could specify that they want to add an "Accept-Language" header or a "DNT" header or whatever other arbitrary headers they might want.)
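A sketch of how such a user configuration might be loaded (the file name and format are invented for illustration):

  # extra-headers.conf contains one "Name: value" per line, e.g.
  #   Accept-Language: en
  #   DNT: 1
  def load_extra_headers(path="extra-headers.conf"):
      headers = {}
      for line in open(path, encoding="utf-8"):
          line = line.strip()
          if line and not line.startswith("#"):
              name, _, value = line.partition(":")
              headers[name.strip()] = value.strip()
      return headers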

> Support a subset of HTML5, so that embedded images, audio and video are possible.

Mostly agree. It would be useful for the user to be able to switch embedded images on/off; if off, they appear as links. Embedded audio/video is probably not useful at all; I would have <audio> and <video> elements displayed as a list of links (the audio/video can be viewed if you follow the links).
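A sketch of that degradation, using the HTML parser from the Python standard library (simplified; a real implementation would also resolve relative URLs and render the links properly instead of printing them):

  from html.parser import HTMLParser

  class MediaToLinks(HTMLParser):
      # show <img>/<audio>/<video> as plain links instead of embedding them
      def handle_starttag(self, tag, attrs):
          if tag in ("img", "audio", "video", "source"):
              src = dict(attrs).get("src")
              if src:
                  print(f"[{tag}] {src}")

  MediaToLinks().feed('<p>text <img src="a.png"> <video src="b.webm"></video></p>')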

> Support modern CSS, possibly leaving deprecated or complex features out.

I would probably leave out most of the features, although you do not necessarily have to do so. However, it would be important to allow disabling CSS (and to ensure that "complying with the requirements above" (see below) means a site is guaranteed to work correctly if the user chooses to disable CSS).

> Support NO JavaScript at all, as JavaScript is one of the main sources of complexity behind a modern web browser, and is typically abused for user fingerprinting.

Agree.

> Mandate the use of TLS-encrypted connections.

Disagree. Encrypted and unencrypted connections are both useful (and the URI scheme would distinguish them; this allows end users to easily filter out any sites that do not support encryption from their local index).

> Allow integration with SOCKS5 proxies e.g.: Tor.

Agree, although in addition to this, it is also sometimes useful to be able to use local programs as proxies and to have the proxy handle TLS (although there is some complication in handling client certificates when doing so).

> Provide passwordless authentication via client certificates, and always ask for user authorization beforehand.

Agree with both parts. (Passwords might still be implemented too (although if you don't want to, then you don't have to); HTTP has an "Authorization" header for this purpose, and Scorpion also supports something similar (in addition to supporting client certificates if the connection is encrypted).) It will be necessary to ensure that the user can command the browser to log out at any time (both with passwords and with client certificates).
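For the client-certificate part, a sketch of what attaching one to a connection looks like (the file names and host are placeholders); "logging out" then simply means opening new connections without loading the certificate:

  import socket, ssl

  ctx = ssl.create_default_context()
  ctx.check_hostname = False
  ctx.verify_mode = ssl.CERT_NONE  # sketch only; a real client should verify
  ctx.load_cert_chain(certfile="me.crt", keyfile="me.key")
  with ctx.wrap_socket(socket.create_connection(("example.com", 1965)),
                       server_hostname="example.com") as s:
      s.sendall(b"gemini://example.com/private\r\n")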

> Provide a local index of sites complying with the requirements above, so that sites can be found without the use of an external search engine. [...] Such index can be updated from third-parties, similarly to package managers like APT.

I think it is a good idea.

> Custom providers can be easily added by users, so the network remains decentralised.

This is important if you are doing the above. (Being able to manually adjust the index is also helpful; see the next paragraph for why this is helpful.)
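A sketch of what such an index and its update step could look like (the one-line-per-site format and the provider URL are invented for illustration):

  import urllib.request

  PROVIDERS = ["https://example.org/smallweb-index.txt"]  # user-editable list

  def update_index(path="index.txt"):
      # fetch each provider's list and concatenate; one "url<TAB>title" per line
      with open(path, "wb") as f:
          for url in PROVIDERS:
              with urllib.request.urlopen(url) as r:
                  f.write(r.read())

  def search(term, path="index.txt"):
      for line in open(path, encoding="utf-8"):
          url, _, title = line.rstrip("\n").partition("\t")
          if term.lower() in title.lower():
              print(url, title)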

In addition to this, there is another possibility: an alternate-service index. In case of a link to an unsupported service (i.e. one not in the index), the browser can interpret it using an alternate service (e.g. a plain-HTML version of Twitter or Mastodon, or a Gemini service that displays a proxied news article, etc). In some cases, it may be able to figure this out from the retrieved HTML or the HTTP response headers, e.g. if it is a Mastodon instance. Other times the user might specify them manually when viewing them.
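For the detection part, one possible heuristic is a sketch like this (Mastodon and many other Fediverse servers publish NodeInfo metadata at a well-known path; everything else here is invented for illustration):

  import json, urllib.request

  def fediverse_software(host):
      # NodeInfo discovery: a document at /.well-known/nodeinfo links to the
      # actual metadata, whose "software.name" field is e.g. "mastodon"
      try:
          with urllib.request.urlopen(f"https://{host}/.well-known/nodeinfo") as r:
              links = json.load(r)["links"]
          with urllib.request.urlopen(links[0]["href"]) as r:
              return json.load(r)["software"]["name"]
      except Exception:
          return None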

> Sites accessible from it can still be accessed from traditional web browsers.

OK. (If you follow my multi-protocol suggestion above, then this is not always the case; I think it is useful to have multiple ways, and this is one of them.)

> It provides guarantees on a subset of features from the modern web that do not harm users.

OK.

> Users do no longer have to worry on inspecting which websites can be trusted, as such guarantees would be provided by the browser.

This is very helpful.

> It allows reusing existing tools, both web browsers and servers.

Yes, although it is not always desirable for several reasons, e.g. for testing compatibility. (Sometimes it is desirable, though.)

> Because of the smaller set of features, it also leads to simpler code, allowing more implementations to flourish over time.

This is also helped by my suggestion to require that it works correctly if the user chooses to disable CSS.

It additionally links to a "Native Web" document. I disagree with those ideas. It is not necessary to only allow AGPL3, since it is possible to have source code available in other ways that are compatible with AGPL3 (e.g. public-domain source code without patent restrictions, etc). I would use uxn/varvara, which is much simpler to implement, as well as more portable, and avoids the other disadvantages listed there; but it is also not as "powerful" a system and not native code, so that is a different disadvantage. About hardware access, I think that programs should not request hardware access directly; instead, e.g. if a program requests audio input, the user can specify a microphone or another program or an existing audio file, etc. (This can also be solved in my way of designing a new operating system with "proxy capabilities"; such a system could run inside other systems as well as stand-alone, and can run native code as well as emulate non-native instruction sets, so that is another way to solve it, although it is more complicated than using uxn/varvara.)



Have you got any suggestions?


HTML only (with forms). Client-side CSS only. No JavaScript. No cookies.


That's not acceptable for 99.9% of the people, so they'll stay on their current browsers. An alternative must be attractive to succeed.


I don't feel the need to fix the whole world. Just my corner of it.

I would use it to read my RSS feeds. I'm sure we could make Hacker News discussions work. My Mastodon feed could probably work too. That's 90% of my browser usage right there.

Just imagine how pleasant it would be to browse and navigate. Would be so fast, so responsive.

I really like https://geminiprotocol.net/ but I think they went too far in removing images, sound, video, and forms.


This is where ideals meet ugly reality. Most people cherish convenience over everything else up until the externalities cannot be ignored.

Still a long way to go if that is to happen on the web.



