Hacker News

I don't think an LLM can even detect garbage during a training run. During training, the system is only tasked with predicting the next token in the training set; it isn't trying to reason about the validity of the training set itself.
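To illustrate the point: the standard next-token objective is just cross-entropy on the observed token. Here's a minimal sketch (plain Python, hypothetical logits) showing that the loss depends only on the probability the model assigns to whatever token actually appears next; nothing in it measures whether the text is true.

```python
import math

def next_token_loss(logits, target):
    """Cross-entropy loss for one position: -log softmax(logits)[target].

    The loss rewards matching the training data, whatever it says;
    there is no term that scores the data's factual validity.
    """
    m = max(logits)                                # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return -math.log(exps[target] / z)

# A model that is maximally unsure over a 3-token vocab pays log(3)
# regardless of whether the target token came from true or false text.
uniform_loss = next_token_loss([0.0, 0.0, 0.0], target=0)
print(round(uniform_loss, 4))  # -> 1.0986 (= log 3)
```

Whether the target token completes a correct statement or a garbage one, the gradient pushes the model toward reproducing it.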


