I feel that long, unexplainable, hard-to-comprehend proofs may become commonplace when we get to AIs that are capable of discovering new proofs on their own.
I think that’s very likely and in fact even necessary to advance the field. Proofs won’t even be stored in a human-readable format. It will just be a collection of data necessary for an automated theorem prover (or verifier in this case) to connect the truth of a formal statement back to its axiomatic basis.
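Something like this already exists in miniature. In Lean 4, for example, the kernel only ever checks an elaborated proof term, and terms synthesized by automation certify the statement without being meant for human eyes. A minimal sketch (the printed term is illustrative; exact output varies by version):

    -- Lean 4: the kernel checks proof terms; readability is optional.
    -- `decide` finds a proof by brute-force evaluation of a decidable claim.
    theorem small : 91 % 7 = 0 := by decide

    -- The elaborated term is data for the checker, not for a reader:
    #print small
    -- e.g. => theorem small : 91 % 7 = 0 := of_decide_eq_true (Eq.refl true)

Scale that up from toy arithmetic to a real theorem and you get exactly the "collection of data for a verifier" picture.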
I don’t see any problem with that. Most of the human work will shift to designing the meta-algorithms that efficiently search “proof space” for interesting results.
Abstract math is a human hobby. Machines doing it on their own is an interesting idea, but not satisfying to humans. You may as well conjecture whatever you want and not worry about proof at all.
How so? You can just assume that some conjecture is true and proceed with your work. (For some conjectures that is acceptable, for others, not so much.)
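Formal systems make this bookkeeping explicit. In Lean 4 you can posit a conjecture as an axiom and build on it, and the checker will always tell you which results are conditional on it. A rough sketch using Collatz (iter and step are ad-hoc local definitions here, not library functions):

    -- Ad-hoc helper: iterate f exactly k times.
    def iter (f : Nat → Nat) : Nat → Nat → Nat
      | 0,     n => n
      | k + 1, n => iter f k (f n)

    def step (n : Nat) : Nat :=
      if n % 2 = 0 then n / 2 else 3 * n + 1

    -- Assume the Collatz conjecture outright; everything below is
    -- conditional on this axiom.
    axiom collatz : ∀ n : Nat, 0 < n → ∃ k, iter step k n = 1

    theorem collatz_27 : ∃ k, iter step k 27 = 1 :=
      collatz 27 (by decide)

    -- The dependency stays visible:
    #print axioms collatz_27   -- => 'collatz_27' depends on axioms: [collatz]

Informal mathematics works the same way: the literature is full of results proved conditionally on the Riemann hypothesis.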
An interesting proof would have to show something more than just the truth: maybe it's constructive and shows how to compute something, or it shows a connection between fields previously seen as barely related. Or it uses a new trick or language that could be applied elsewhere. But I think all of that requires the proof to be a bit more transparent than just having a formally verifiable representation.
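The constructive case has a concrete formal counterpart: a proof that carries its witness as data can literally be run. A small Lean 4 sketch (findEven is a made-up name, and I'm assuming the omega tactic closes the parity arithmetic):

    -- Constructive: not merely "an even number ≥ n exists", but a
    -- function computing one, bundled with its certificate.
    def findEven (n : Nat) : { m : Nat // n ≤ m ∧ m % 2 = 0 } :=
      if h : n % 2 = 0 then
        ⟨n, Nat.le_refl n, h⟩
      else
        ⟨n + 1, Nat.le_succ n, by omega⟩

    #eval (findEven 7).val   -- => 8

The same statement phrased with ∃ in Prop would be erased at runtime; the subtype version keeps the computation, which is the kind of extra content an opaque certificate doesn't give you.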
I expect that we will move in a direction where all of the proof, not just individual steps, is done by computers. The only human input remaining is to ask what to prove.
It is quite easy to come up with questions that have well-defined answers but insanely complex solutions. Nobody guarantees that just because the question and the answer are short, the path from A to B will be short too; Fermat's Last Theorem fits in one line, yet its known proof runs to over a hundred pages.
Yes, in the same way computers generate strategies for games that seem incomprehensible at first, but after further analysis humans discover deeper underlying principles that went unnoticed.