I have to be honest, I'm a bit suspicious, considering the study was conducted by the company that developed the AI, and that the academic auditors are all law professors rather than AI professors.
I'm not surprised at all. I just joined a company that has a similar product, and everyone we demo it for is amazed at how accurate and quick it is. AI and law is a huge untapped market that's just starting to get explored.
Consider: legal language is very frequently formulaic and repetitive — it's actually where coding adopted the phrase "boilerplate" from. Is it that surprising that AI would be very, very applicable to this kind of problem?
As long as they did a reasonable job of making sure the company running the AI didn't have access to the questions long enough to cheat, I think the law professors would have much more to say about the output than an AI professor would.