PGP protects against warrants if done well. But if gmail.com has a system to force-push new keys (which it basically has to have; the recipient could be given the option to reject, but everyone will just click "yes, accept updated public key for this contact"), then a warrant can force Gmail to use that existing system to push a MITM key and intercept encrypted email.
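A toy sketch of that attack surface (hypothetical names and structure, not any real provider's design): if the client auto-accepts server-pushed key updates, whoever controls the directory can silently swap in an interception key.

```python
# Toy model of a provider-controlled key directory. Keys are just
# string labels here, not real cryptographic material.

class KeyDirectory:
    """Server-side mapping from address to published public key."""
    def __init__(self):
        self.keys = {}

    def publish(self, addr, pubkey):
        self.keys[addr] = pubkey


class Client:
    """Client that auto-accepts pushed key updates ("just click yes")."""
    def __init__(self, directory):
        self.directory = directory
        self.pinned = {}

    def key_for(self, addr):
        current = self.directory.keys[addr]
        if self.pinned.get(addr) not in (None, current):
            # A careful user would stop here and verify out of band.
            # Most users just accept the updated key, so we fall through.
            pass
        self.pinned[addr] = current
        return current


directory = KeyDirectory()
directory.publish("alice@example.com", "alice-real-key")

bob = Client(directory)
assert bob.key_for("alice@example.com") == "alice-real-key"

# Under a warrant, the provider force-pushes a MITM key:
directory.publish("alice@example.com", "mitm-key")
assert bob.key_for("alice@example.com") == "mitm-key"  # silently accepted
```

The security of the whole scheme reduces to that one `pass`: unless the client hard-fails (or the user actually checks) on a key change, the directory operator can intercept at will.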
Why are we proposing schemes that are broken from the outset?
As others have pointed out: Signal (among others) does not have these problems. These problems occur because email was not meant to be encrypted (or signed); efforts to do so result in these kinds of convoluted “maybe secure, maybe not” models.
If we want people to be able to communicate privately, we should be encouraging them to use protocols that are meant for that purpose.
I'm not sure Signal actually solves the key distribution issue. It re-issues keys for users semi-often, and there is a feature to verify the new keys, but most users won't use it. It also has a potential frontend distribution problem through the app stores: Apple or Google could be compelled to distribute a compromised Signal app to specific users.
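The verification feature works roughly like comparing a shared fingerprint out of band. A simplified sketch in the spirit of Signal's safety numbers (this is not the real algorithm, just the idea):

```python
# Out-of-band key verification sketch: both parties derive the same
# short string from the pair of public keys and compare it in person
# or over a trusted channel. Simplified stand-in for Signal's actual
# safety-number construction.
import hashlib

def fingerprint(key_a: bytes, key_b: bytes) -> str:
    # Sort so both parties compute the identical string regardless
    # of which key they consider "theirs".
    digest = hashlib.sha256(b"".join(sorted([key_a, key_b]))).hexdigest()
    # Render as short groups for easy reading aloud.
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

alice_view = fingerprint(b"alice-key", b"bob-key")
bob_view = fingerprint(b"bob-key", b"alice-key")
assert alice_view == bob_view  # matches only if neither key was swapped

# A MITM who replaced Bob's key produces a different fingerprint:
mitm_view = fingerprint(b"alice-key", b"mitm-key")
assert mitm_view != alice_view
```

The catch is exactly the one above: the mismatch is only detected if users actually compare the numbers, and most won't.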
Theoretically at least, Android has a TOFU-like system where the developer signs the app (although Google also offers a program, Play App Signing, where Google manages the keys for developers who opt in). That doesn't help people who are specifically targeted, since on most devices Google can change that via updates to system components, or selectively hold back critical security updates through the Play Store channel. But it does raise the bar a bit.
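The TOFU mechanism can be sketched in a few lines (hypothetical simplification of the Android behavior): the first install pins the signer's key, and later updates are only accepted if signed by that same key.

```python
# Minimal trust-on-first-use (TOFU) sketch for app-update signing.
# Simplified model of the idea, not Android's actual implementation.

class Device:
    def __init__(self):
        self.installed = {}  # app name -> pinned signer key

    def install(self, app: str, signer_key: str) -> bool:
        pinned = self.installed.get(app)
        if pinned is None:
            # First install: trust and pin whatever key signed it.
            self.installed[app] = signer_key
            return True
        # Update: only accept if signed by the originally pinned key.
        return signer_key == pinned


device = Device()
assert device.install("signal", "dev-key") is True        # first install pins key
assert device.install("signal", "dev-key") is True        # legitimate update
assert device.install("signal", "attacker-key") is False  # rejected by TOFU
```

This is why the scheme "raises the bar": a compromised update must either be signed with the original key or arrive through a channel (like a privileged system component) that bypasses the pin entirely.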
I am really suspicious of Signal, just because it's promoted so much.
I think it is secure if you don't install it via Google and you actually do verify your contacts, but the NSA might be relying on the fact that most people will not.