
The other side of this is that your LLM interactions can be leveraged to train adversarial models that find ways to deny or delay your claim anyway, despite your LLM-advised actions.

Insurance companies, or the firms they pay to launder their involvement, would pay far more for that capability than the public ever could.



I know; that is a worry I have.



