
I feel that long, unexplainable, hard-to-comprehend proofs may become commonplace when we get to AIs that are capable of discovering new proofs on their own.


I imagine they’d only be taken to be proofs if formally verified. Admittedly formal verification needn’t guarantee comprehensibility.
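As a toy illustration of that gap, here is a minimal Lean 4 sketch (assuming a recent Lean where `decide` can discharge bounded quantifiers): the kernel certifies the statement, but the proof term `decide` produces is essentially brute-force evaluation and carries no human-readable explanation of why the statement holds.

    -- Fully machine-checked, yet the underlying proof term (generated by
    -- `decide`, i.e. exhaustive evaluation) explains nothing to a reader.
    -- Fact being checked: no square is congruent to 3 mod 4 (up to a small bound).
    example : ∀ n : Nat, n < 30 → (n * n) % 4 ≠ 3 := by decide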


I assume there will be a whole industry around refactoring these proofs for human legibility.


At least I'll be on equal footing with something. I usually find out I've hallucinated my proofs :)


I think that’s very likely and in fact even necessary to advance the field. Proofs won’t even be stored in a human-readable format. It will just be a collection of data necessary for an automated theorem prover (or verifier in this case) to connect the truth of a formal statement back to its axiomatic basis.

I don’t see any problem with that. Most of the human work will shift to designing the meta-algorithms that efficiently search “proof space” for interesting results.
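For a rough sense of what "proof as data" already looks like today, here is a small Lean 4 sketch (the theorem name is made up, and it assumes a Lean version that ships the `omega` tactic): the object the kernel actually checks is the elaborated term behind the tactic script, which `#print` will dump and which is not meant for human reading.

    -- One-line tactic proof; `omega` is a decision procedure for linear arithmetic.
    theorem add_comm_example (a b : Nat) : a + b = b + a := by omega

    -- Dumps the elaborated proof term: the raw data the verifier consumes.
    #print add_comm_example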


Abstract math is a human hobby. Machines doing it on their own is an interesting idea, but not satisfying to humans. May as well conjecture whatever you want, and not worry about proof at all.


While unsatisfying from an "I solved this riddle" perspective, I still think that's a great thing because it will open up a lot of stuck fields.


How so? You can just assume that some conjecture is true and proceed with your work. (For some conjectures that is acceptable, for others, not so much.)

An interesting proof would have to show something more than just the truth: maybe it's constructive and shows how to compute something, or it shows a connection between fields previously seen as barely related. Or it uses a new trick or language that could be applied elsewhere. But I think all of that requires the proof to be rather more transparent than just having a formally verifiable representation.
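To make the constructive point concrete, here is a tiny Lean 4 sketch (the theorem name is invented for illustration): an existence proof built from an explicit witness carries computational content you can read off, unlike a bare certificate that the statement is true.

    -- Constructive: the witness 2 is part of the proof object itself,
    -- so the proof shows how to produce the thing it claims exists.
    theorem exists_even_pos : ∃ n : Nat, 0 < n ∧ n % 2 = 0 :=
      ⟨2, by decide, by decide⟩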


I expect that we will move in a direction where all proving, not just individual steps, is done by computers. The only human input remaining will be to ask what to prove.


It’s interesting that there are an infinite number of things to prove, but the search for which ones are interesting requires understanding.


And I wonder some about what the interesting ones have in common.


It is quite easy to come up with questions which have well-defined answers but whose solutions are insanely complex. Nothing guarantees that just because the question is short and the answer is too, the path from A to B will be short as well.


Yes, in the same way computers generate strategies for games that seem incomprehensible at first, but after further analysis humans discover deeper underlying principles that went unnoticed.



