On the other hand, as hard as auditing software is, auditing humans is just about impossible. Any kind of "human safety valve" is just as much a vector for corruption (if not more so) as it is a bodge to avoid having to audit for bugs.
If a trial depends on what judge you were (un)lucky enough to get, then I'd consider that a far bigger flaw than the occasional misjudgement.
Which is why there are layers of accountability and protections. A system of appeals. A free press. Gubernatorial and presidential pardons, etc. Of course it's not perfect, and of course it can be, and has been, subject to tremendous corruption. But the idea that you can simply insert code-based contracts into the maelstrom of human society and expect to avoid those problems is sheer fantasy.
Indeed, you call attention to the possibility of "occasional misjudgement" here, in this thread, where the context is the complete and utter failure of the entire system due to defects in the system. You're standing next to a barn whose doors are not only wide open but which is actively burning to the ground, and you're telling people that using the barn is safe and only subject to occasional misjudgement.
For the record, I never invested anything into ETH. I'm more interested in the general concept than this specific approach.
That said, human day-to-day oversight has also caused massive failures. Eliminating failures entirely isn't possible; the question is how to reduce them in the long run.
>As hard as auditing software is, auditing humans is just about impossible.
It's the other way around. No software today is verified to the degree that anything but the most inconsequential smart contracts would require. Instead, we deal with software the way we handle contracts: when something goes wrong (which is quite common), we intervene.
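To make that concrete: the DAO failure this thread is about came down to a reentrancy bug, where the contract paid out before updating its books. Here's a minimal Python sketch of that class of defect (the VulnerableVault and Attacker names are hypothetical illustrations, not the actual DAO code); the point is how innocuous the ordering mistake looks in review:

    # Minimal simulation of a reentrancy bug, the class of defect behind
    # the DAO drain: the vault pays out *before* updating its books, so a
    # malicious recipient can call back in and withdraw repeatedly.

    class VulnerableVault:
        def __init__(self):
            self.balances = {}  # depositor -> credited balance
            self.total = 0      # funds the vault actually holds

        def deposit(self, who, amount):
            self.balances[who] = self.balances.get(who, 0) + amount
            self.total += amount

        def withdraw(self, who, receive_callback):
            amount = self.balances.get(who, 0)
            if amount > 0 and self.total >= amount:
                receive_callback(amount)   # external call first -- the bug
                self.balances[who] = 0     # bookkeeping happens too late
                self.total -= amount

    class Attacker:
        def __init__(self, vault):
            self.vault = vault
            self.stolen = 0

        def receive(self, amount):
            self.stolen += amount
            # Re-enter withdraw() before the vault zeroes our balance,
            # stopping once we've drained everything it holds.
            if self.stolen < self.vault.total:
                self.vault.withdraw("attacker", self.receive)

    vault = VulnerableVault()
    vault.deposit("honest_user", 90)
    vault.deposit("attacker", 10)

    attacker = Attacker(vault)
    vault.withdraw("attacker", attacker.receive)
    print(f"attacker put in 10, walked away with {attacker.stolen}")  # 100

The canonical fix is to update the books before making the external call. That's a two-line reorder, which is exactly the problem: whether the contract is safe or a nine-figure hole turns on an ordering detail a reviewer can easily wave through.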