> If you install Debian you have to make sure that the ISO was not compromised. You can do that by comparing the published checksums with a third party, or via any other out-of-band channel. Once this base of trust is established you are good to go. In fact, updates are fetched over plain HTTP, because all of the updates are signed with GPG.
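As a minimal sketch of that checksum step (filenames are illustrative; in the real flow the checksum list itself is GPG-signed and verified first, e.g. `gpg --verify SHA256SUMS.sign SHA256SUMS`, against a key obtained out-of-band):

```shell
# Stand-in for the downloaded ISO; in practice this is the multi-GB image.
printf 'fake iso contents\n' > debian.iso
# What the mirror publishes: a list of checksums (normally GPG-signed).
sha256sum debian.iso > SHA256SUMS
# The check the user runs; any tampering with the ISO makes this fail.
sha256sum -c SHA256SUMS   # prints "debian.iso: OK"
```

Once the checksum list's signature is trusted, any on-path modification of the ISO itself is detected by this last step.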
It's hard to take this seriously - do you really call a third party every time you rebuild a Dockerfile and run an update with your package manager? 99.9% of developers deploying to production do not. If anything, the best guard we have is for Debian (as in your example) to realize its keys have been compromised (probably by someone else suffering such an attack) and alert its users - and that has an inherent delay.
> Now compare with any "web crypto": every time the user loads the page he starts back from zero. An attacker can MITM at any time and inject a trivial `form.onsubmit=sendCredentials` and your user is compromised. There is no way to verify out-of-band that the distributed JavaScript is coming from ProtonMail and even if there was, it would have to be done on every page load.
As I wrote above, your concept of verification is quite unrealistic - but even granting it, I'd say it's fair that the danger of a compromised SSL cert is definitely greater for "web crypto" than for the desktop, since even Dockerfiles are rebuilt far less often than web pages get reloaded, so a much greater number of users would be affected before anyone became aware.
I tend to think ProtonMail is a good thing -- it's obviously not the best choice if you need as many security guarantees as possible, but for the general population this is a strong improvement over sell-all-your-data-to-advertisers Google Mail.
There's really no comparison between the two scenarios.
Hacking the MIT server or distros to replace the GPG code is an incredibly noisy attack, one that would leave an evidence trail the size of Utah, home state of the NSA. It's a risky attack because you don't have selectors (you don't know to whom to serve the trojanized code), so you will necessarily infect more targets than necessary. There are also multiple manual and automatic tripwires - MD5 hashes, code signatures etc. - that must be circumvented, and triggering any of them will blow the whole thing open and lead to public outcry. It's also currently illegal and leaves binary forensic traces on the infected machines.
On the other hand, forcing an email provider to collaborate is standard legal practice (Lavabit), you have a perfect selector (the email address of the target), there is no tripwire, and getting the key will permit you to decrypt all past and future communication. Once the user closes the browser, the evidence is gone. Easy as pie.
So while in principle the threat model is similar, the practicalities of the two situations are vastly different, calling into question the whole "we want to provide practical security" mantra.
> There's really no comparison between the two scenarios. Hacking the MIT server or distros to replace the GPG code is an incredibly noisy attack...
What percentage of the packages you use in a production deployment are GPG signed? A minority, I'd guess.
> On the other hand, forcing an email provider to collaborate is standard legal practice (Lavabit), you have a perfect selector (the email address of the target), there is no tripwire, and getting the key will permit you to decrypt all past and future communication. Once the user closes the browser, the evidence is gone. Easy as pie.
This is totally irrelevant to the conversation at hand. As discussed, ProtonMail uses client-side crypto with PGP-signed messages -- meaning the only thing they store is your encrypted keys.
Yes, if ProtonMail could be forced to serve passphrase-stealing JavaScript then your encrypted keys would be vulnerable - but so would you be if Debian were forced to serve a keylogger as a kernel module. BTW, I do agree that SSL is a much higher risk factor than a Linux system with a local mail server using PGP - but I don't see a better alternative than something like ProtonMail for the majority of consumers.
> Yes, if ProtonMail could be forced to serve passphrase-stealing JavaScript then your encrypted keys would be vulnerable - but so would you be if Debian were forced to serve a keylogger as a kernel module.
That's exactly what I'm saying: the situations are not remotely comparable, for the reasons stated.
There is no need to modify the ISO image on the server's disk, just as there is no need to modify the ProtonMail source code on disk (either of which would make it more likely to get caught). The NSA is known to have the ability to modify an ISO image as it is downloaded, within a single selected TLS stream. So if TLS is compromised, an attack on a Linux distro is the same as an attack on ProtonMail.
I don't know why you are talking about Docker, but even for Docker I have their public GPG key set up in my Puppet installation code. Docker is installed with APT using the same signature mechanism, and Docker in turn now does container signature verification. There is a chain there.
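That chain can be sketched roughly as follows. This is a configuration sketch, not a definitive recipe: the URL and paths are the standard Docker-on-Debian ones, but treat the release name and file locations as illustrative, and the key's fingerprint should still be checked against an out-of-band source before trusting it.

```shell
# Pin Docker's public GPG key for APT (fingerprint verified out-of-band).
curl -fsSL https://download.docker.com/linux/debian/gpg \
  | gpg --dearmor -o /usr/share/keyrings/docker.gpg

# Tell APT to accept this repo only when signed by that key.
echo "deb [signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/debian bookworm stable" \
  > /etc/apt/sources.list.d/docker.list
apt-get update   # fails loudly on a signature mismatch

# Docker's own link in the chain: verify image signatures on pull.
export DOCKER_CONTENT_TRUST=1
docker pull debian:stable
```

Each link only extends trust already established by the previous one, which is the point being made: a MITM that misses any step produces a visible verification failure rather than a silent compromise.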
If an attacker wants to compromise that chain, he has to be present from the start and re-sign the packages through a MITM the whole time. If I switch Internet connections and fetch an update, I will get a signature mismatch.
Now compare that to ProtonMail's webmail. At any point, if an attacker is able to MITM SSL, the user is compromised. Game over. The client won't even have a chance to see a signature mismatch and take appropriate action after the fact.
I'm not saying that ProtonMail is a bad idea, but "web crypto" definitely is in my book. That doesn't mean you can't implement another client for the desktop, like you did for Android and iOS. Distributing the software and the data over different channels really does make these attacks more difficult.
> It's hard to take this seriously - do you really call a third party every time you rebuild a Dockerfile and run an update with your package manager?
It's best practice for all of these develop/deploy processes to (automatically) verify GPG signatures. If you verified your initial ISO, then you know (to a certain extent) that you have a known-good GPG binary. There are ways to attack the web of trust, and each time you add or update a trusted key there are issues -- but compare this to the number of shifty CAs all browsers trust out of the box. Any single one of them is enough to trick the client.
Attack scenario: disrupt the client's access to the Internet. Serve what looks like the webmail page. When the user enters the passphrase, log it, then replay the login details to the actual webmail service (and get the private key). If you're a normal attacker: download the email and decrypt it. If you're a state agent: get the encrypted email from intercept logs (this assumes TLS is broken - let's hope it isn't - or that there is some way to intercept the non-TLS traffic, e.g. between the load balancer and disk storage).
How would this compare to an air-gapped laptop used for traditional email? You would need to physically attack the laptop - not just have access to the ISP. That's the difference between economical mass surveillance (targeted or not) and "boots on the ground".
How does this compare to traditional non-air-gapped GPG-encrypted mail? In order to compromise a client that is updated via GPG-signed updates, you'd have to get a signing key, or implant one - not simply bully any old CA out of hundreds into giving you one. Then you'd have to trigger an update somehow, or intercept one. Given the above, if you can own the client's net access this shouldn't be too hard. But the client still needs to install those updates; with a web service, the client just needs to load the app.