(1) doesn’t preserve sender identity: the receiver is stripping the original signature and/or ciphertext, meaning that B’s forwarding of A’s email to C is not verifiable with A as the original sending identity. Depending on the context, this either doesn’t matter or makes the entire scheme useless.
(2) and (3) have similar problems.
(4) is a TOFU-ish scheme, meaning that it’s trivial for an adversary on the wire to replace the keys.
(5) doesn’t work, full stop. E-mail wasn’t designed with encrypted headers in mind. Trying to shoehorn these things together leads to all kinds of weird misplaced user assumptions around what is signed or not, etc.
1) When I send an email to someone, why would I care whether they can forward it and prove I wrote what they forwarded? I want to prove I wrote my emails, not theirs. If Apple/GMail/Office365 feel this is important, they can include a signature for the decrypted text -- this could even be togglable. Then when you hit "forward" on an email sent with that toggle enabled, it includes the signatures of the earlier senders in the chain, so those content blocks stay provable (see the sketch after point 5 below). But this isn't how DKIM works and I don't think it really matters. If I'm wrong about it mattering, see above for a quick solution.
2-3) What? Not seeing the problem here? Do you also think encrypted emails need to prevent someone taking a photo of their screen with their phone and sending the photo to someone else? If we can't prevent screenshots and copy/paste of decrypted content, we just shouldn't encrypt anything? What is the logic here?
4) It would also be signed by DKIM and/or the sender's private key, so your proposed MITM attack is not trivial at all (a quick DKIM-verification sketch is below as well). Are you aware of DKIM and PGP signing? You're trying to establish authority with "Email wasn’t designed with encrypted headers in mind [so you can't do it]", but that authority is undermined by not demonstrating an understanding of modern email.
5) By the magic of controlling 90% of all end-user email in the western world, a consortium of Apple, Gmail and Office365 can do whatever they goddamned well please.
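For point 1, a rough sketch of the "signature for the decrypted text" toggle, using the python-gnupg wrapper. The keyring path, the alice@ key ID, and the passphrase are all made up for illustration; the point is only that a detached signature over the exact content block can ride along with a forward and be checked by the final recipient.

```python
import tempfile
import gnupg

# Hypothetical keyring that already holds Alice's private key and Carol has Alice's public key.
gpg = gnupg.GPG(gnupghome="/tmp/demo-keyring")

body = "Here is the quarterly report."

# Alice's client, with the toggle on, produces a detached signature over the content block.
sig = gpg.sign(body, keyid="alice@example.com", passphrase="alice-pw", detach=True)

# Bob hits "forward": his client attaches sig.data alongside the quoted block.
with tempfile.NamedTemporaryFile(suffix=".asc", delete=False) as f:
    f.write(sig.data)
    sig_path = f.name

# Carol's client checks the forwarded block against Alice's detached signature.
check = gpg.verify_data(sig_path, body.encode())
print(check.valid, check.username)
```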
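And for point 4: checking the transport-level signature is routine. A minimal sketch with the dkimpy package, where "message.eml" is a placeholder for whatever raw message came off the wire:

```python
import dkim  # from the dkimpy package

# Raw RFC 5322 message as received, headers and all.
with open("message.eml", "rb") as f:
    raw = f.read()

# Verifies the DKIM-Signature header against the selector's public key in DNS.
# A MITM that swapped keys or content in transit fails this check.
print(dkim.verify(raw))
```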
Why are you letting perfect be the enemy of the good? Regular email doesn't preserve sender identity when forwarding either, and why should that even matter for encrypted mail? Encrypted clients can disable including the original email in a reply, which is totally unnecessary nowadays. And it's okay if headers are unencrypted as long as the client makes clear they're insecure. All I'm seeing is a strictly better system than cleartext mail, one that might have holes around the edges but none that can break the integrity of single-recipient encrypted mail, which is what most people use encrypted email for anyway.
This is the rare case where the "perfect is the enemy of the good" logic doesn't apply. Encrypted email is secure messaging. Secure messaging is life-or-death (or at least life's fortune) for many ordinary people who rely on it. The only reason encrypted email is taken seriously by nerds is that none of their messages matter; it's LARP security, a thing done performatively as a social signal, and it doesn't matter if it's safe because getting hit with the LARP sword doesn't kill you.
The right way to think about secure messaging, and the compromises we should be willing to accept with it, is the way we think about avionics or radiotherapy software. Safety is practically the only thing that matters; if you can't provide it, there's no point in talking about how convenient, open, federated, or standardized it is.
I don't follow. Simple single-sender, single-receiver mail would still be 100% secure, so use that if it's life or death. Forwarding and multiple recipients can come with a warning that they will be insecure. Same with the headers. Anyone actually relying on it to be secure would understand the opsec necessary to maintain this, just like on any other platform.
1. It leaves metadata unprotected, which is usually just as valuable if not more so to investigators.
2. It leaves the subject unencrypted, which isn't even metadata --- it's message content.
3. It's effectively plaintext-by-default, which is why everybody who has ever used encrypted email has seen someone reply to an encrypted email with an unencrypted response that includes a transcript of the encrypted message.
4. It's based on long-term secrets and a cumbersome secret exchange process with no forward secrecy, something no other secure messenger accepts, because that configuration makes it just a matter of time before someone loses their key to an investigator and compromises the entire transcript. Put differently, the configuration of cryptography in secure email encourages investigators to simply record all encrypted messages in perpetuity, since they'll eventually get the one key that unlocks all of them. (A toy sketch of what forward secrecy buys you follows this list.)
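To make point 4 concrete, here's a toy sketch (not anything email actually does) of the property in question: a fresh X25519 key pair per message, discarded after use, so that seizing a long-term key later unlocks nothing that was recorded earlier.

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def session_key(my_ephemeral, their_ephemeral_pub):
    # Derive a one-time symmetric key from an ephemeral Diffie-Hellman exchange.
    shared = my_ephemeral.exchange(their_ephemeral_pub)
    return HKDF(algorithm=SHA256(), length=32, salt=None, info=b"per-message").derive(shared)

# Fresh key pairs for this message only; once they're dropped, no long-term
# secret anywhere can reconstruct the session key from a recorded ciphertext.
alice_eph = X25519PrivateKey.generate()
bob_eph = X25519PrivateKey.generate()

assert session_key(alice_eph, bob_eph.public_key()) == session_key(bob_eph, alice_eph.public_key())
```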
These are disqualifying attributes that cryptography engineers would never accept in any modern design. The only reason they're tolerated in email is that almost everybody who uses encrypted email is doing so performatively, so that it simply doesn't matter when their counterparty replies in plaintext; it's a party foul, not the end of someone's life.
Part of this is, I think, that PGP came to popularity in the 1990s, during a time when the Internet was itself kind of a toy, and if you had a threat model, it probably involved someone you'd pissed off on EFNet IRC. If your only adversaries are script kids who are going to own you up for your mail spool, PGP does a great job!
The problem is, real-world adversaries, now that the Internet is as prolific and important as the telephone, don't play by IRC script kid rules. So much so that the plaintext content of a PGP'd email often doesn't even matter; they just need the source and destination email addresses and the time the mail was sent, to determine where to roll the van up to in order to beat the plaintext out of the recipient. Or, in the US case, 18 USC 1001 you into federal custody.
Because it’s not good, it’s bad. “Let the perfect not be the enemy of the good” applies to schemes like the Web PKI or phone numbers as identities in E2E chatting schemes, not to things that outright don’t work.
(You’ll note that even the simplest single sender case here assumes both key distribution and stable keys for users, neither of which PGP makes easy.)
I don’t understand what warrants have to do with key distribution.
The problem here is much simpler than that: you can’t encrypt to me if you don’t know my key. PGP doesn’t give you a sound way to get my key; every mechanism offered by the larger PGP ecosystem is either broken or disabled due to persistent abuse.
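For what it's worth, here's roughly what "getting my key" from the ecosystem looks like today: an HKP query against a public keyserver (the server and address below are just placeholders). Nothing in the response binds whatever comes back to the human being you actually meant, which is exactly the complaint.

```python
import requests

# HKP lookup: ask a public keyserver for any key matching an address.
resp = requests.get(
    "https://keyserver.ubuntu.com/pks/lookup",
    params={"op": "get", "search": "someone@example.com", "options": "mr"},
    timeout=10,
)
resp.raise_for_status()

# An ASCII-armored key block, if any key was ever uploaded under that address.
# Anyone can upload a key for any address, so this proves nothing about identity.
print(resp.text[:200])
```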
PGP protects against warrants if done well. But if gmail.com has a system to force-push new keys (which it basically has to have; the receiver of a key could get the option to reject it, but everyone will just click "yes, accept updated public key for this contact"), then a warrant can force Gmail to use that existing system to push a MITM key and intercept encrypted email.
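A toy version of the pinning logic in question. The crypto isn't the weak point; the last line is, because in the scheme described above the provider controls both the push channel and the UI that asks the user whether to accept a key change.

```python
# Minimal TOFU pin store: trust the first fingerprint seen per contact,
# and refuse to silently accept a different one later.
pins: dict[str, str] = {}

def accept_key(contact: str, fingerprint: str) -> bool:
    known = pins.get(contact)
    if known is None:
        pins[contact] = fingerprint  # trust on first use
        return True
    # A changed fingerprint is exactly what a force-pushed MITM key looks like.
    # If the client turns this into a click-through "accept" dialog, the protection is gone.
    return known == fingerprint
```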
Why are we proposing schemes that are broken from the outset?
As others have pointed out: Signal (among others) does not have these problems. These problems occur because email was not meant to be encrypted (or signed); efforts to do so result in these kinds of convoluted “maybe secure, maybe not” models.
If we want people to be able to communicate privately, we should be encouraging them to use protocols that are meant for that purpose.
I'm not sure Signal actually solves the key distribution issue: it re-issues keys for users semi-often, and while there is a feature to verify the new keys, most users won't bother. It also has a potential frontend distribution problem through the app stores (Apple/Google could be compelled to distribute a compromised Signal app to specific users).
Theoretically at least, Android has a TOFU-like system where the developer signs the app (although Google also has a product, which developers can sign up for, where Google manages the keys). That doesn't help people who are specifically targeted, since on most devices it's within Google's control to change that via updates to system components, or to selectively hold back critical security updates via the Play Store channel, but it does raise the bar a bit.
I am really suspicious of Signal, just because it's promoted so much.
I think it is secure if you don't install it via Google and you actually do verify your contacts, but the NSA might be relying on the fact that most people will not.
That's why it was proposed that iCloud or whatever big tech provider does the key distribution, just like they do for iMessage. It would be a big win for Apple if they actually cared about privacy. But maybe it's just a pipe dream.
People keep proposing this as if federated key distribution is easy, and as if iCloud could, with sheer force of will, just do it correctly.
There is a reason nobody serious is trying to make email encryption work: secure messaging is already incredibly hard when you control every piece of the system; with email, you control absolutely nothing.
I could easily be misunderstanding PGP, but for (1), if A encrypts an email with B's public key (but does not sign it with A's private key), the intent is to make the email readable only by B but not verifiable that A sent it. If A wants both (they want to have the sender (A) verified and to ensure only the recipient (B) can read it), they then also sign it with their private key. Order does not matter.
If B wants to forward it to C, they can strip off their own (B's) encryption, leaving A's (optional) private key signature, and send that to C.
C gets an email that contains a message from A which is (again, optionally) signed with A's private key; if it was signed, they can verify that A sent that part of the message. (Headers and newline mangling aside.)
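A sketch of that flow with the python-gnupg wrapper (the keyring path, key IDs, and passphrases are all made up; it assumes A's, B's, and C's keys are already in the keyring). A signs first, then encrypts the signed blob to B; B's decrypt removes only the encryption layer, so the thing B forwards to C still carries A's signature.

```python
import gnupg

gpg = gnupg.GPG(gnupghome="/tmp/demo-keyring")  # hypothetical keyring with A, B, C keys

plaintext = "Meet at noon."

# A signs, then encrypts the signed message to B only.
signed = gpg.sign(plaintext, keyid="alice@example.com", passphrase="alice-pw")
to_b = gpg.encrypt(str(signed), "bob@example.com", always_trust=True)

# B strips the encryption layer addressed to him; A's signature stays wrapped around the text.
inner = gpg.decrypt(str(to_b), passphrase="bob-pw")

# B forwards the still-signed blob, re-encrypted to C.
to_c = gpg.encrypt(inner.data, "carol@example.com", always_trust=True)

# C decrypts and checks that the inner message really was signed by A.
received = gpg.decrypt(str(to_c), passphrase="carol-pw")
check = gpg.verify(received.data)
print(check.valid, check.username)
```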
>If B wants to forward it to C, they can strip off their own (B's) encryption, leaving A's (optional) private key signature, and send that to C.
That could work. The PGP format is pretty modular. You could just strip out the content packet and signature packet. People don't do it because it would be confusing. Things like encrypted email list servers would be more likely to retain signatures.
Believe it or not, the possibility is actually something people have complained about in the past: