I've seen at least as many outages caused by problems in the additional complexity implemented to avoid having a single point of failure as I've seen outages caused by having one.
Plus, given something like DRBD, having a cold spare that's trivial to spin up isn't that hard to do (and has the nice advantage of being relatively storage-technology agnostic).
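For concreteness, spinning up the spare is basically a promote-and-mount. This is a hedged sketch, not a tested runbook: the resource name `r0`, device `/dev/drbd0`, and mount point are all hypothetical, and the exact `drbdadm` subcommands vary a bit between DRBD 8 and 9.

```shell
# On the surviving (Secondary) node, after the primary dies:
drbdadm status r0                 # sanity-check replication state first
drbdadm primary r0                # promote the cold spare to Primary
mount /dev/drbd0 /srv/data        # mount the replicated block device
# ...then start whatever service (Postgres, MySQL, etc.) lives on it
```

The storage-agnostic part is exactly this: DRBD mirrors the block device underneath, so the same failover steps work whatever you've put on top of it.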
The not-so-nice disadvantage is that your cold spare can't actually do any work (like serve read traffic), and that if your application itself corrupts data, DRBD will dutifully mirror that corruption. The spare will also come up with cold caches, so expect it to perform poorly at first, but I guess a slow site is considerably better than a dead one.
If your secondary is doing work, then you'll get a performance degradation from losing the primary anyway.
The difference here is that once the slave's warmed up you're back to full speed, whereas with a hot-spare-being-read-from the performance degradation lasts until you bring the other box back.
Any such corruption is effectively a buggy update: normal replication will propagate a buggy write just as happily, and even if the write crashed the node entirely, there's a good chance your application's retry logic will re-run it against the slave moments later.
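That retry-re-runs-the-bug failure mode is easy to sketch. This is a toy illustration, not any real client library: `buggy_update`, `write_with_retry`, and the dict-backed "databases" are all made up for the example.

```python
def buggy_update(db):
    # The application bug itself: writes garbage no matter which node it hits.
    db["balance"] = -1

def write_with_retry(nodes, op):
    # Typical failover retry loop: try the primary, fall back to the spare.
    for db in nodes:
        try:
            op(db)
            return db
        except ConnectionError:
            continue  # primary died mid-write; retry against the next node
    raise RuntimeError("all nodes down")

class CrashingNode(dict):
    # Simulates a primary that crashes on every write.
    def __setitem__(self, key, value):
        raise ConnectionError

primary = CrashingNode(balance=100)
spare = {"balance": 100}

write_with_retry([primary, spare], buggy_update)
print(spare["balance"])  # -1: the retry delivered the corruption to the spare
```

So crashing the primary buys you nothing here; the buggy write lands on the spare through the application's own retry path, with no replication involved at all.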