It's mathematics where certain kinds of proof by contradiction are not allowed -- if you want to prove that something exists because you want to use it, then you have to make it, you can't just say "let's assume it didn't exist and go on from there to deduce that 0 = 1 which is definitely wrong, so our assumption is wrong, so it exists, so let's use it".
Here is the reason why it's obvious for mathematicians. If you're trying to prove a theorem (e.g. the theorem that class numbers of imaginary quadratic fields tend to infinity), and then someone comes up with a proof which assumes that a certain generalisation of the Riemann hypothesis is true, and then someone else comes up with a proof which assumes that the exact same generalisation is false, then we mathematicians say "great, the theorem is now proved". This actually happened.
However if your boss asks you to write some code which does a calculation, and the next day you show up in their office with two USB sticks and say "this code is guaranteed to produce the correct answer if the Riemann hypothesis is true, and this other code is guaranteed to produce the correct answer if the Riemann hypothesis is false" then you are going to lose your job, because all you did was prove that the code exists, which is less helpful than it could be.
For me one of the biggest problems with the area of formalisation of mathematics is that for decades it has been dominated by computer scientists, whose view of what is important and beautiful in mathematics does not really coincide with that of the working mathematician. This is what I am fighting to change.
Oh wow! Was it an interesting theorem that got proved this way? Did it get proved in another, less controversial manner as well? Could you provide a pointer to it?
I remember, when encountering my first (or one of my first) proof by contradiction (the irrationality of sqrt(2), say), feeling unsatisfied because I didn't know how one could tell that it was _your_ assumption that was wrong and not one of the more fundamental assumptions of mathematics.
A way to think about whether something is constructive is that everything is decidable, in the sense that there is some procedure that will give you the answer in finite time.
The problem in Kevin's example, where you have two programs, one which depends on the Riemann hypothesis being true and the other which depends on it being false, is that you can't just execute the programs -- they might have undefined behavior, since they depend on the truth of some statement during their execution, and there's no general algorithm to decide whether undefined behavior is happening. They depend on undecided facts, so you can't put them together into a single program that decides the result. (Maybe a slogan is "truth can be contingent, but data cannot.")
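To see the contrast, here is a minimal Python sketch (my own illustration, not from the thread) where the branching proposition is decidable, so the two conditional programs really can be merged into one total program -- exactly the step that fails when the proposition is something undecided like the Riemann hypothesis:

```python
def is_even(n: int) -> bool:
    # A decidable proposition: a procedure that answers in finite time.
    return n % 2 == 0

def answer_if_even(n: int) -> str:
    # "Program that is correct assuming the proposition is true."
    return f"{n} = 2 * {n // 2}"

def answer_if_odd(n: int) -> str:
    # "Program that is correct assuming the proposition is false."
    return f"{n} = 2 * {n // 2} + 1"

def combined(n: int) -> str:
    # Merging the two programs is possible only because is_even is
    # decidable. If the branch condition were an undecided statement
    # like RH, this `if` could never be executed.
    if is_even(n):
        return answer_if_even(n)
    else:
        return answer_if_odd(n)
```

The two conditional programs are only data once the branch can actually be computed; classically the merged program "exists" either way, but constructively it doesn't until the condition is decidable.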
In Lean syntax, this is an example type signature of something that takes two programs (h and h') that respectively take in the truth/proof of a proposition or its negation, along with a proof that there is at most one element of α, and produces that element. I don't think there's a way to write this definition in Lean without marking the definition noncomputable:
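A sketch of what that signature might look like in Lean 4 (the identifier names are my own; the `uniq` hypothesis encodes "at most one element of α", which guarantees the two branches agree on the element produced):

```lean
-- Deciding an arbitrary proposition requires the noncomputable
-- instance Classical.propDecidable, so the definition cannot be
-- compiled to executable code and must be marked noncomputable.
open Classical in
noncomputable def extractOfDilemma {α : Type} {p : Prop}
    (h : p → α) (h' : ¬p → α)
    (uniq : ∀ a b : α, a = b) : α :=
  if hp : p then h hp else h' hp
```

The dependent `if hp : p then ... else ...` is where the law of the excluded middle sneaks in: it case-splits on a proposition for which no decision procedure is available.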
Yup, I do accept the law of the excluded middle. Unreservedly. From a logical point of view, I cannot really imagine what it would mean NOT to be true.
Yes, Kripke semantics makes sense of constructive logic. Topos theory, too. But I really think of all of these embedded in classical logic, and assuming that the law of excluded middle doesn't hold for general reasoning just doesn't make any kind of sense to me.
Thanks Kevin. It seems like having both approaches is more flexible and thus more general (which can hopefully only be a good thing?). One could always replace a proof by contradiction later with a direct proof, but there's definitely value in using it in the meantime (at least in my uneducated view).
p.s. could you let me know your e-mail so I can keep it for future reference, as I'm also working on a hobby project for automated theorem proving.