Self-hosting a static site with OpenBSD, httpd, and relayd (citizen428.net)
50 points by protomyth on July 16, 2022 | hide | past | favorite | 23 comments


The most interesting thing I learned from this post was OpenBSD Amsterdam hosting. I might like to use that...


Yep, I’m an OpenBSD.Amsterdam customer myself, couldn’t be happier.


This whole step with relayd is why I have a hoster taking care of all this. It would never have occurred to me that any of this is necessary.

Somewhere in history we took a wrong turn that made a simple thing like publishing a website this complicated.


It is easy to replace httpd+relayd with Nginx, and IMHO the config file will be easier to write and maintain.
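For comparison, a minimal nginx sketch of such a static-site setup. The domain, certificate paths, and document root below are placeholders, not anything from the article:

```nginx
# Plain HTTP: redirect everything to HTTPS.
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

# HTTPS: serve the static files directly.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/example.com.fullchain.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    root  /var/www/htdocs/example.com;
    index index.html;
}
```

No separate relay/reverse-proxy process is needed, since nginx terminates TLS itself.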


It's about knowing what to configure. I bet my distro comes with a well-preconfigured web server, but I can't be sure.

Why is all this configuring of different headers necessary to publish a website? To me it looks like the system is broken.


That is only the case because of the relayd step, which is optional. The Let's Encrypt certificate stuff is indeed a complication that wasn't always there, but is expected now. Other than that, any old Apache or Nginx basic config works just the same as it ever did.
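For reference, a basic httpd.conf sketch in the spirit of the article's setup, without the relayd step. The domain and paths are placeholders; httpd.conf(5) is the authoritative reference for the syntax:

```
# /etc/httpd.conf -- minimal static site on OpenBSD httpd.

# Port 80: answer ACME challenges, redirect everything else to HTTPS.
server "example.com" {
    listen on * port 80
    location "/.well-known/acme-challenge/*" {
        root "/acme"
        request strip 2
    }
    location * {
        block return 302 "https://$HTTP_HOST$REQUEST_URI"
    }
}

# Port 443: serve the static files over TLS.
server "example.com" {
    listen on * tls port 443
    tls {
        certificate "/etc/ssl/example.com.fullchain.pem"
        key "/etc/ssl/private/example.com.key"
    }
    root "/htdocs/example.com"
}
```

The certificate itself would come from acme-client(1), which writes the challenge responses into the `/acme` directory above.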


Thanks for the clarification!

Aside, I was not complaining about the Let's Encrypt stuff, I'm totally ok with using https nowadays.


Why do we need SSL for static blogs? Is it just that Google is pushing for everything to be HTTPS, or is there a good reason that static public content needs encryption?


So your stuff does not get hijacked or tampered with. If you don't use it, any middle-box can scribble whatever it wants all over your website or replace it entirely.


A middle box on a private network can impersonate any website; HTTPS is of no help in this case. For example:

https://www.zdnet.com/article/google-catches-french-govt-spo...


Yes, and a rogue CA can sign a certificate for any site, and HTTPS in its current state might not defend against it. (Though I think some browsers now check for the certificate's presence in the CT logs as an additional security measure.)

Such an attack would immediately end with the relevant CA being removed from the certificate stores of popular browsers/OSes, which is quite detrimental to the business that owns it.

This does not mean that HTTPS doesn't provide security: compared to plain text, it is significantly harder to game, verging on infeasible.


> Such an attack would immediately end with the relevant CA being removed from the certificate stores of popular browsers/OSes, which is quite detrimental to the business that owns it.

Absolutely, but when I look at the list of CAs (~70) that are preloaded in browsers:

* Most of them are unknown to most people => how do we know there are no malevolent CAs?

* A significant number are from countries that are not so free => what power does an SME like Firefox have against, say, the Turkish, Hungarian, or Chinese state?


Unless you need to deal with companies that use certificates from countries that you do not trust, why not just delete/distrust them?

The organisation behind Firefox is the Mozilla Corporation, which is owned by the Mozilla Foundation. It's not particularly easy to find out what its legal status is, but that probably doesn't matter. It is not obliged to carry anyone's root certificates. There would be a market share cost for not doing so, but that's it.


> How do we know there are no malevolent CAs?

CT logs make certificate issuance very public, which allows the community to react to mis-issued certificates very quickly. This, however, doesn't stop a CA from simply not submitting the cert to the CT logs.

Browsers like Google Chrome check that the certificate appears in CT logs, and will refuse to connect otherwise.


How so?

For me, HTTPS is there to protect the user, not the server. If you want to protect the server, minimize the attack surface, use very strong passwords, and set up resource ownership (chown) carefully.


The data can be tampered with as it is going over the network if it isn't authenticated. TLS provides both privacy and integrity guarantees.
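The integrity half of that guarantee can be illustrated with a toy sketch. The key and function names below are made up for illustration, and real TLS uses AEAD ciphers rather than a bare HMAC over records, but the principle is the same: any in-transit modification is detected by the receiver.

```python
import hashlib
import hmac

# Toy illustration: sender MACs the payload with a key shared with the
# client (in TLS, established during the handshake). A middle-box that
# rewrites the payload cannot produce a valid tag without the key.
key = b"session-key-established-via-handshake"

def protect(payload: bytes) -> tuple[bytes, bytes]:
    """Return the payload plus its authentication tag."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload, tag

def verify(payload: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

page, tag = protect(b"<h1>My static blog</h1>")
assert verify(page, tag)                    # untouched: accepted
tampered = page.replace(b"blog", b"ads!")   # middle-box rewrites content
assert not verify(tampered, tag)            # modification detected
```

This is why even public static content benefits from TLS: the reader can tell the bytes really came from the server unmodified.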


That's exactly what I am saying, HTTPS is not a protection for the server, but for the client.


It protects both. The server wants to ensure the client gets the intended data, and the client wants to ensure it gets the data the server intended to send. Neither side wants a situation where the message can be tampered with.


No one except for you is talking about protecting the server.


> xupybd: Why do we need SSL for *static blogs*?


Yes, that is what I asked, but I think you might be misunderstanding the answer.

If someone can hijack the DNS of a client, they can send the client to a totally different copy of the static site. The client will have no idea and the site could contain malicious code or information. With a certificate they will get an error saying this is not the correct site.

That assumes there has been no CA tampering and that they haven't managed to hijack Let's Encrypt's DNS either.
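The hostname check that produces that browser error can be sketched in simplified form. This toy function only mirrors the subjectAltName matching step; real validation per RFC 6125 also verifies the CA signature chain and validity dates:

```python
# Simplified sketch of the hostname check a browser performs against a
# certificate's subjectAltName entries. A hijacked DNS answer still fails
# here: the attacker's certificate (if they even have a CA-signed one)
# names the wrong host, so the browser shows a certificate error.

def hostname_matches(hostname: str, san_entries: list[str]) -> bool:
    host_labels = hostname.lower().split(".")
    for entry in san_entries:
        entry_labels = entry.lower().split(".")
        if len(entry_labels) != len(host_labels):
            continue
        # A wildcard may only stand in for the single leftmost label.
        if entry_labels[0] == "*" and entry_labels[1:] == host_labels[1:]:
            return True
        if entry_labels == host_labels:
            return True
    return False

assert hostname_matches("blog.example.com", ["*.example.com"])
assert not hostname_matches("blog.example.com", ["*.attacker.net"])
```

Note the wildcard covers exactly one label, so `*.example.com` does not match `a.b.example.com`.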


Thanks for your answer.

But how can we trust our browsers?

For example, Firefox ships with CAs such as:

- XRamp Security Services, but it looks like their website is not reachable [1]. A related website [0] is not even on HTTPS. How is this supposed to be a valid CA?

Yet if a certificate is signed by this CA, a browser will show no warning (unless it tries to reach their OCSP endpoint).

[0] http://xramp.com

[1] www.xrampsecurity.com


You are exactly correct.

It's building a chain of trust. If any link in that chain is compromised then you have a security hole.

It's not perfect, much like locking your house only goes so far when someone can cut a hole in your wall and walk in.



