> However, AI works best with a database — and those databases are not yet there.
They do, but I would say that if that is your main motivation for creating those databases, don't do it. The AI might spend milliseconds on work that takes man-years to compile, yet get stuck on absolutely tiny details it can't grok and which a human could quickly explain.
If AI is the goal, I would propose discussing with AI researchers in the area of automated maths what they can actually use to make progress. There are ways to have the AI learn unsupervised as much as possible, and to have it ask for supervision where needed.
Similarly, it took a while to teach AlphaGo to play Go, but only a couple of extra years for AlphaZero to play both Go and chess without any database of example games.
I fully agree with the rest of the argument, and I love the work being done to double-check mathematics. But as an AI researcher, I think this is a poor motivation and would warn against relying on it, as it may be very disappointing in the long run how little of it turns out to be useful for AI.
If you read the article, you can tell that AI is not the goal; it is a byproduct of the goal. The goal is to have confidence in the correctness of mathematical results proved by other experts.
Also, there has _already_ been some success in using AI to find proofs in Lean.
There is a very active community around Lean ranging from mathematicians with no clue about computers to AI researchers trying to improve the automation. These AI researchers have explicitly said that bigger training sets will be very helpful.
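For readers unfamiliar with what such a training set contains: each entry is a formal statement paired with a machine-checkable proof. A minimal sketch of what one such pair looks like (Lean 4, Mathlib-style naming; `add_comm_example` is an illustrative name, not from any actual corpus):

```lean
-- A formalized statement-and-proof pair, the basic unit
-- a Lean training corpus is built from.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

A model trained on many such pairs learns to propose proof terms or tactic steps for new statements, which the Lean kernel then verifies, so incorrect suggestions are caught rather than silently accepted.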
But like the sibling comment says, AI is not the main point.