
See [1]: there was a recent scandal where an AI app recommended a recipe that produces deadly chlorine gas as a by-product. I can definitely see why LLM hallucinations could be super dangerous with recipes; I'm unlikely to kill someone if ChatGPT suggests a method in a module that does not exist.

[1] https://www.theguardian.com/world/2023/aug/10/pak-n-save-sav...

Note that it only recommended that because users intentionally prompted it to.