AI tools make different kinds of mistakes than humans do, and that's a problem. We've spent eons building systems to mitigate and correct human mistakes; we have no equivalent for the subtler kinds of mistakes AI tends to make.
Presumably the "subject matter expert" will review the output of the LLM, just like a reviewer would. I think it's disingenuous to assume that just because someone used AI, they didn't look at or review the output.
But why would a serious person claim that they wrote this without AI when it's obvious they used it?!
Using any tool is fine, but someone bragging about not having used a tool they actually used should make you suspicious about the amount of care that went into their work.
That's fine. Write it out yourself, then ask an AI to suggest improvements as a diff. Now you've given it double human review (once during creation, again when reviewing the diff) and a single AI review.
That's one review with several steps and some AI assistance. Checking your work twice is not equivalent to having it reviewed by two people; part of reviewing your own work (or anyone else's) is checking multiple times and taking advantage of whatever tools are at your disposal.
Quality prose usually only gets that way after many rounds of review.