Is it just me, or is "proof" used in two ways also – the logic and the result? To use the given example, do I need to understand the logic of the Bolzano-Weierstrass proof in order to use it?
No, those are different things. The proposition and its proof. You can assume a proposition and make use of it without a proof.
A proof furnishes the evidence that a given proposition is attainable (e.g., true).
In fact, you could even see this as being composed of three things:
- The proposition
- The "stamp of truth"
- (there's a word from Kant that hits this, but I've forgotten it)
- The proof
A proof always furnishes the "stamp of truth", but you could imagine receiving the stamp of truth without receiving the proof. For instance, an authority you trust implicitly simply tells you so. Alternatively, take a look at all of the zero-knowledge proof work in cryptography.
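To make this concrete, here's a minimal Lean 4 sketch (the names `P` and `stamp` are purely illustrative): an `axiom` hands you the stamp of truth with no inspectable proof behind it.

```lean
-- A hypothetical proposition, accepted on authority: we assert P and a
-- witness of it without ever constructing a proof term we could inspect.
axiom P : Prop
axiom stamp : P

-- Downstream, the assumption is just as usable as a proved theorem:
example : P ∧ P := ⟨stamp, stamp⟩
```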
Additionally, there may be many proofs of a single proposition, and thus the "stamp of truth" seeks to represent them as the equivalence class of proofs instead of giving each different method its own identity. This, formally, is the notion of "proof irrelevance". It flies in the face of computation, where we care a great deal about algorithmic concerns, but from a correctness perspective perhaps all you need is the "stamp of truth".
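Proof assistants bake this in. In Lean 4, for instance, the universe of propositions is definitionally proof-irrelevant, so the collapse to an equivalence class is literal:

```lean
-- Any two proofs of the same proposition are equal by definition:
-- the type checker sees only the "stamp", never the method.
theorem stampsCoincide (p : Prop) (h₁ h₂ : p) : h₁ = h₂ := rfl
```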
One last way to give color here is to note what the advantage of proof _relevance_ is. In particular, if the structure of the proof itself is visible, then we can do things such as _transform_ proofs (think: Lisp macros). In this way, we can get more flexible systems which can port much of a proof's mechanism over to a new target automatically.
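In Lean terms this is roughly the difference between `∃` (proof-irrelevant, lives in `Prop`) and `Σ'` (proof-relevant, lives in `Type`). A sketch, with an illustrative name:

```lean
-- A proof-relevant existence claim (Σ' rather than ∃) carries its
-- witness as visible data...
def evenWitness : Σ' n : Nat, n % 2 = 0 := ⟨4, by decide⟩

-- ...so a program can open the proof up and reuse its mechanism.
-- The Prop-level `∃ n, n % 2 = 0` would not let us extract the 4.
#eval evenWitness.fst  -- 4
```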
Of course, mathematicians demand proof relevance because they (a) don't trust things they can't see and (b) want to cannibalize proofs for their mechanisms and reuse those mechanisms on other problems. Proof irrelevance is often important for computational reasons: we don't want to carry around the burden of proof (literally) if we don't have to.
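Lean's compiler shows the computational payoff directly: anything in `Prop` is erased from compiled code, so a certified value costs no more at run time than a bare one. A sketch:

```lean
-- At run time this value is just the number 4: the evenness proof rides
-- along through the type checker and is then erased by the compiler.
def four : { n : Nat // n % 2 = 0 } := ⟨4, by decide⟩

#eval four.val  -- prints 4; no proof is carried at run time
```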
It is both, in the same way as a function is both its implementation and its "extension" (its effect on all inputs). You don't need to know how `sort` is implemented if you know that it sorts; similarly, you don't need to know how Bolzano-Weierstrass is proved if you want to use it.
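A minimal Lean 4 sketch of that distinction (the `double` names are illustrative; assumes a recent Lean with the `omega` tactic): two implementations, one extension.

```lean
-- Two different implementations...
def double₁ (n : Nat) : Nat := n + n
def double₂ (n : Nat) : Nat := 2 * n

-- ...with one and the same extension: they agree on every input.
theorem same_extension (n : Nat) : double₁ n = double₂ n := by
  unfold double₁ double₂
  omega
```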
> It is both, in the same way as a function is both its implementation and its "extension" (its effect on all inputs).
Note that, according to the definition of 'function' most common in math, this is false. A function need not even be implementable.
Aside: As someone trained in math I've always found explanations of type theory "fractally confusing" because of stuff like this: type theorists' words mean something different from my words, and in the explanation of what they do mean, they use words that also mean something different.
A function is rather like a group: it's more important as an idea than as a rigorous definition. Working mathematicians may be vaguely aware that a function "is" a set of ordered pairs (just as they may be vaguely aware that a real number "is" a Dedekind cut), but they're unlikely to think about a function in those terms when they're actually working with it (unless they're working specifically on foundational issues). Indeed, the set-theoretic model of a function accommodates impossible (e.g., uncomputable) functions, and that's one of the main problems type theory exists to solve. The type-theoretic model of a function does not correspond directly to the ZFC model, but the hope of type theory is that it corresponds more closely to the day-to-day working concept of a function (whether you're a mathematician or a programmer).
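For a flavor of the type-theoretic view, a tiny Lean sketch (the name `f` is illustrative):

```lean
-- Type-theoretically, a function is a rule you can apply, not a set of
-- pairs you can enumerate: there is no object "the graph of f" to
-- inspect unless you construct it yourself.
def f : Nat → Nat := fun n => n + 1

#eval f 41  -- 42
```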
> > ... a function is both its implementation and its "extension" (its effect on all inputs).
> Note that, according to definition of 'function' most common in math, this is false.
The standard definition of a function in ZFC is a set of ordered pairs assigning exactly one "output" to every "input", so I think the previous commenter was correct with this reading.
> A function need not even be implementable.
However, if you are alluding to computable functions, then certainly the story is a lot more interesting!
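Both halves of this can be made precise in Lean (a sketch; `IsZFCFunction` and `oracle` are illustrative names, not library functions). The first definition renders the ZFC-style condition as a relation that is total and single-valued; the second is a perfectly legitimate classical function that no program can implement.

```lean
-- ZFC-style: a "function" from A to B is a relation pairing each input
-- with exactly one output.
def IsZFCFunction {A B : Type} (f : A → B → Prop) : Prop :=
  ∀ a : A, ∃! b : B, f a b

-- A legitimate classical "function" that is not implementable: it
-- decides an arbitrary proposition. Lean demands the `noncomputable`
-- marker because the axiom of choice produces no executable code.
open Classical in
noncomputable def oracle (p : Prop) : Bool :=
  if p then true else false
```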
Nope, as long as there is _any_ proof (i.e. implementation logic), you can use it. Similar to software -- if you're happy with a library's interface, you don't have to read its source code in order to use it.