But that makes it too easy to lie, omit, and equivocate. If the LLM is trained on all of their public statements over the last X years, and any official documents authored by them, then—theoretically!—you get something that's harder to manipulate.
Aha. I was thinking you were suggesting it be trained by the campaign. I'm usually bearish about how LLMs are being used, but if you created a corpus of legitimately sourced documents that humans could peruse, and added a feature to the LLM that let it cite which document(s) it used to answer a question, that would be quite interesting.
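To make the citation idea concrete, here's a toy sketch of the plumbing: a corpus of sourced documents with stable IDs, and an answer function that returns the IDs of the documents it drew on so a human can go read them. All names here (`Doc`, `cite_answer`, the sample corpus) are hypothetical, and the naive keyword-overlap ranking is just a stand-in for a real LLM with retrieval; the point is the citation interface, not the retrieval quality.

```python
import re
from dataclasses import dataclass

@dataclass
class Doc:
    doc_id: str   # stable ID a human can use to look up the source
    source: str   # provenance: speech, authored bill, op-ed, ...
    text: str

def tokens(s: str) -> set[str]:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", s.lower()))

def cite_answer(question: str, corpus: list[Doc], top_k: int = 2) -> dict:
    """Rank documents by naive keyword overlap with the question and
    return the best matches as citations. A real system would generate
    an answer with an LLM over the retrieved documents; here we only
    illustrate returning citations alongside the retrieved text."""
    q_words = tokens(question)
    scored = []
    for doc in corpus:
        overlap = len(q_words & tokens(doc.text))
        if overlap:
            scored.append((overlap, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    cited = [doc for _, doc in scored[:top_k]]
    return {
        "snippets": [d.text for d in cited],
        "citations": [(d.doc_id, d.source) for d in cited],
    }

# Hypothetical corpus of public statements and authored documents.
corpus = [
    Doc("speech-2020-03", "campaign speech",
        "we will expand rural broadband access"),
    Doc("bill-hr-101", "authored bill",
        "funding for rural broadband infrastructure"),
    Doc("oped-2019", "op-ed",
        "tax policy should favor small businesses"),
]

result = cite_answer("what is the position on rural broadband?", corpus)
```

Here `result["citations"]` would point at the two broadband documents, so a reader can verify the answer against the original sources rather than trusting the model.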