
The obvious intermediate step, if you're using LLMs for this purpose, is to add an actual expert into the workflow.

Basically, add a "validate" step: you'd first chat with the LLM and draft conclusions, then vet those conclusions with an expert specially trained to be skeptical of LLM-generated content.
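
For concreteness, here's a minimal sketch of what that gate might look like in Python. Everything in it is hypothetical: `validate_with_expert` and the `expert_review` hook are illustration names, not any product's API, and in practice the review hook would be a human expert, not a function:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Conclusion:
        text: str
        approved: bool = False
        reviewer_notes: str = ""

    def validate_with_expert(
        draft: str,
        expert_review: Callable[[str], tuple[bool, str]],
    ) -> Conclusion:
        """Gate LLM output behind a skeptical reviewer.

        `expert_review` is a hypothetical hook standing in for a human
        (e.g. a lawyer) who reads the draft and flags hallucinated
        citations before anything goes out the door.
        """
        approved, notes = expert_review(draft)
        return Conclusion(text=draft, approved=approved, reviewer_notes=notes)

    # Hypothetical usage: the draft stands in for whatever the LLM
    # produced in the chat step; nothing is used until it's approved.
    draft = "Smith v. Jones (1972) establishes that ..."
    result = validate_with_expert(
        draft, lambda d: (False, "Cited case does not exist; reject.")
    )
    if not result.approved:
        print("Rejected:", result.reviewer_notes)

The point of structuring it this way is that the LLM's output is never a terminal artifact; it's an input to a review step that defaults to rejection.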

I would be shocked if there aren't law firms already doing something exactly like this.



Ah, so have the lawyer do everything the GPT did so the lawyer can be sure the GPT didn't fuck up.



