That is only the case because of the relayd step, which is optional. The Let's Encrypt certificate stuff is indeed a complication that wasn't always there, but is expected now. Other than that, any old Apache or Nginx basic config works just the same as it ever did.
Why do we need SSL for static blogs? Is it just that Google is pushing for everything to be HTTPS, or is there a good reason that static public content needs encryption?
So your stuff does not get hijacked or tampered with. If you don't use HTTPS, any middle-box can scribble whatever it wants all over your website or replace it entirely.
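To illustrate the client side of this, a minimal Python sketch (the URL is a placeholder): the stdlib's default TLS context both verifies the server's certificate chain and checks the hostname, which is exactly what stops a middle-box from substituting its own content.

```python
import ssl
import urllib.request

# The default client context verifies the certificate chain against the
# platform trust store AND checks that the cert matches the hostname.
ctx = ssl.create_default_context()
print(ctx.check_hostname, ctx.verify_mode == ssl.CERT_REQUIRED)  # True True

# Hypothetical fetch: a tampering middle-box presenting its own certificate
# would raise ssl.SSLCertVerificationError here instead of returning
# doctored content.
# urllib.request.urlopen("https://example.com/", context=ctx)
```

With plain HTTP there is no equivalent check, so the client has no way to tell a genuine response from a rewritten one.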
Yes, and a rogue CA can sign a certificate for any site, and HTTPS in its current state might not defend against that. (Though I think some browsers now check that the certificate's issuance is present in the CT logs, as an additional security measure.)
Such an attack would immediately end with the relevant CA being removed from the certificate stores of popular browsers/OSes, which is quite detrimental to the business that owns it.
This does not mean that HTTPS doesn't provide security: compared to plain text, it is significantly harder to game, verging on infeasible.
> Such an attack would immediately end with the relevant CA being removed from the certificate stores of popular browsers/OSes, which is quite detrimental to the business that owns it.
Absolutely, but when I look at the list of CAs (~70) that are preloaded in browsers:
* Most of them are unknown to most people => How can we know none of them is malevolent?
* A significant number are from countries that are not so free. => What power does an SME like Firefox have against, say, the Turkish, Hungarian, or Chinese state?
Unless you need to deal with companies that use certificates from countries you do not trust, why not just delete/distrust those CAs?
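For what it's worth, distrusting roots locally is straightforward in most TLS stacks. A sketch in Python (the bundle path is hypothetical) that trusts only a hand-picked set of CAs instead of the full platform store:

```python
import ssl

# A freshly created client context trusts nothing; verification is still
# enabled by default, so connections fail until you load roots you trust.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
print(len(ctx.get_ca_certs()))  # 0 -- no roots loaded yet

# Load only the roots you have vetted (hypothetical path), rather than
# calling load_default_certs(), which pulls in the entire ~70-CA store:
# ctx.load_verify_locations(cafile="/etc/ssl/my-vetted-roots.pem")
```

The trade-off, as noted above, is breakage when you hit a site chaining to a root you dropped.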
The organisation behind Firefox is the Mozilla Corporation, which is owned by the Mozilla Foundation. It's not particularly easy to find out what its legal status is, but that probably doesn't matter. It is not obliged to carry anyone's root certificates. There would be a market share cost for not doing so, but that's it.
CT logs make certificate issuance very public, which allows the community to react to mis-issued certificates very quickly. This, however, doesn't stop a CA from simply not submitting the cert to CT logs.
Browsers like Google Chrome require proof that the certificate was submitted to CT logs, and will refuse the connection otherwise.
For me, HTTPS is there to protect the user, not the server. If you want to protect the server, minimize the attack surface, use very strong passwords, and set up serious resource ownership (chown).
It protects both. The server wants to ensure the client gets the intended data, and the client wants to ensure it gets the data the server intended to send. Neither side wants a situation where the message can be tampered with.
Yes that is what I asked but I think you might be misunderstanding the answer.
If someone can hijack the DNS of a client, they can send the client to a totally different copy of the static site. The client will have no idea, and the site could contain malicious code or information. With a certificate, they will instead get an error saying this is not the correct site.
That assumes there has been no CA tampering, and that they haven't managed to hijack the DNS of Let's Encrypt either.
- XRamp Security Services, but it looks like their web site is not reachable [1]. A related website [0] is not even on HTTPS. How is this supposed to be a valid CA?
Yet if a certificate is signed by this CA, a browser will show no warning (unless it tries to reach their OCSP endpoint).