> Hallucination is a separate problem, which is solved by using fine-tuned models.
Fine-tuned models won't solve the main cause of hallucination: the prompt is connected to the generated text through nothing but probability.
ChatGPT does not generate answers; it produces something that looks like an answer. There is a good chance it is the answer, but you can't guarantee it.
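To make that concrete, here's a toy sketch in pure Python. The probability table is made up and stands in for a real model, but the sampling step works the same way: the next token is drawn from a distribution, and nothing in the loop ever checks whether the result is true.

```python
import random

# Hypothetical toy "model": maps a context to a next-token probability
# distribution. A real LLM does the same thing at vastly larger scale.
TOY_MODEL = {
    ("The", "capital", "of", "France", "is"):
        {"Paris": 0.90, "Lyon": 0.07, "London": 0.03},
}

def sample_next_token(context, model):
    """Pick the next token by probability alone; nothing verifies truth."""
    dist = model[tuple(context)]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

context = ["The", "capital", "of", "France", "is"]
print(sample_next_token(context, TOY_MODEL))
# Usually "Paris", but occasionally "Lyon" or "London": each output is
# just a sample from the distribution, not a checked fact.
```

Most of the time you get the right answer because the right answer is the most probable continuation, but the wrong answers come out of exactly the same mechanism. That's why fine-tuning can shift the probabilities without removing the failure mode.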
I believe this particular problem won't be solved unless researchers teach machines how to reason. But then we would have greater concerns than hallucinations.