
> Hallucination is a separate problem, which is solved by using fine-tuned models.

They won't solve the main cause of hallucination: the prompt has no connection to the generated text other than probability.

ChatGPT does not generate answers; it comes up with something that looks like an answer. There is a good chance it is the answer, but you can't guarantee it.
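
To make "it only samples by probability" concrete, here is a minimal sketch using a toy bigram model in place of a real LLM (the corpus and helper names are made up for illustration). The point is that the generation loop only asks "what is likely to come next?", never "what is true?":

    import random
    from collections import defaultdict

    # Toy training data. Note it contains a wrong "fact" the
    # model has also seen.
    corpus = (
        "the capital of france is paris . "
        "the capital of spain is madrid . "
        "the capital of france is lyon . "
    ).split()

    # Count next-token frequencies to estimate P(next | current).
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(corpus, corpus[1:]):
        counts[cur][nxt] += 1

    def sample_next(token):
        # Sample in proportion to how often each word followed `token`.
        tokens, weights = zip(*counts[token].items())
        return random.choices(tokens, weights=weights)[0]

    def generate(prompt, max_tokens=8):
        out = prompt.split()
        for _ in range(max_tokens):
            nxt = sample_next(out[-1])
            out.append(nxt)
            if nxt == ".":
                break
        return " ".join(out)

    # Sometimes "paris", sometimes "lyon": both are probable
    # continuations, and the sampler has no notion of which
    # one is the answer.
    for _ in range(5):
        print(generate("the capital of france is"))

A real LLM is vastly more sophisticated, but the loop is the same shape: sample the most plausible continuation. Fine-tuning reshapes the distribution; it doesn't add a truth check.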

I believe this particular problem won't be solved unless researchers teach machines how to reason. But then we would have greater concerns than hallucinations.



Yes, but pattern matching and probability will cover the vast majority of use cases. Heck, it already works quite well and offers value.

I don’t need 100% truth, because reading multiple docs myself and piecing things together has tons of potential pitfalls too.



