The obvious intermediate step, when using LLMs for this purpose, is to add an actual expert into the workflow.
Basically, add a "validate" step: first chat with the LLM and draft your conclusions, then vet those conclusions with an expert specifically trained to be skeptical of LLM-generated content.
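A minimal sketch of what that two-step workflow could look like in code. Everything here is hypothetical: `draft_with_llm`, `expert_validate`, and the `Conclusion` type are placeholders standing in for a real LLM API and a human review queue, not any actual library.

```python
from dataclasses import dataclass

@dataclass
class Conclusion:
    text: str
    validated: bool = False
    reviewer_notes: str = ""

def draft_with_llm(prompt: str) -> Conclusion:
    # Step 1: chat with the LLM and draft a conclusion.
    # Placeholder; a real implementation would call an LLM API here.
    return Conclusion(text=f"Draft conclusion for: {prompt}")

def expert_validate(conclusion: Conclusion) -> Conclusion:
    # Step 2: route the draft to an expert trained to be skeptical
    # of LLM-generated content. Nothing is auto-approved: the draft
    # stays unvalidated until a human reviewer signs off.
    conclusion.reviewer_notes = "Queued for skeptical expert review"
    return conclusion

draft = draft_with_llm("Summarize the relevant precedent")
reviewed = expert_validate(draft)
print(reviewed)
```

The point of the sketch is simply that the LLM's output never reaches the final product directly; it always passes through the human gate first.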
I would be shocked if there aren't law firms already doing something exactly like this.