Hacker News

But that makes it too easy to lie, omit, and equivocate. If the LLM is trained on all of their public statements over the last X years, and any official documents authored by them, then—theoretically!—you get something that's harder to manipulate.


Aha. I was thinking you were suggesting it be trained by the campaign. I'm usually bearish about how LLMs are being used, but if you created a corpus of legitimately sourced documents that humans could peruse, and added a feature to the LLM that allowed it to cite which document(s) it used to answer a question, that would be quite interesting.
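The "cite which documents it used" idea can be sketched as retrieval over a vetted corpus, returning the matching sources alongside the answer so a human can verify them. This is a toy sketch, not a real system: the corpus filenames and contents are hypothetical, and the bag-of-words scoring stands in for what a production system would do with embeddings and an actual LLM.

```python
# Toy sketch of answer-with-citations over a vetted corpus.
# All documents and filenames here are hypothetical examples.
from collections import Counter

CORPUS = {
    "speech-2021-03.txt": "we will lower property taxes and fund schools",
    "bill-summary.txt": "this bill restricts certain classroom instruction",
    "interview-2022-09.txt": "i support expanding toll road construction",
}

def tokenize(text):
    return text.lower().split()

def score(query, doc):
    # Simple word-overlap count; a real system would use embeddings.
    q = Counter(tokenize(query))
    d = Counter(tokenize(doc))
    return sum(min(q[w], d[w]) for w in q)

def answer_with_citations(query, top_k=2):
    # Rank documents by overlap and keep only those that actually match,
    # so every answer is traceable to specific source files.
    ranked = sorted(CORPUS.items(), key=lambda kv: score(query, kv[1]),
                    reverse=True)
    citations = [name for name, text in ranked[:top_k]
                 if score(query, text) > 0]
    return {"query": query, "citations": citations}

result = answer_with_citations("what did they say about taxes and schools")
print(result["citations"])  # → ['speech-2021-03.txt']
```

The point of the design is that the citations, not the generated text, are the trustworthy part: a reader can open the cited documents and check the claim directly.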


"As a supporter of Ron DeSantis I can neither confirm nor deny whether I am more accurate than an official policy document."


Why do you think an LLM, which has no connection to the actual candidate, would be able to give you useful information about said candidate?



