
This is likely because of the knowledge cutoff.

I have seen a few cases before of "hallucinations" that turned out to be things that did exist, but no longer do.



The fix for this is for the AI to double-check all links before providing them to the user. I frequently ask ChatGPT to double-check that references actually exist when it gives them to me. It should be built in!


But that would mean OpenAI would lose even more money on every query.


Almost as though it’s not a sustainable business model and relies on tricking people in order to keep the lights on.


Better make each query count then.


Gemini will lie to me when I ask it to cite things: it will either pull up relevant sources or just hallucinate them.

IDK how you people go through that experience more than a handful of times before you get pissed off and stop using these tools. I've wasted so much time because of believable lies from these bots.

Sorry, not even lies, just bullshit. The model has no conception of truth so it can't even lie. Just outputs bullshit that happens to be true sometimes.


I have found myself doing the same "citation needed" loop, but with AI this is a dangerous game, as it will now double down on whatever it made up and go looking for citations to justify its answer.

Pre-prompting it to cite sources is obviously a better way of going about things.
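
A minimal sketch of what such a pre-prompt could look like, assuming the OpenAI Python SDK (v1+); the prompt wording, model name, and user message are placeholders, not a recommended recipe:

    # Illustrative only: send a "cite your sources" instruction as a system message.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    CITATION_PROMPT = (
        "Cite a source (full URL) for every factual claim. "
        "Only cite sources you are confident actually exist; "
        "if you cannot recall a real one, say so instead of inventing it."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": CITATION_PROMPT},
            {"role": "user", "content": "Summarise recent work on X, with citations."},
        ],
    )
    print(response.choices[0].message.content)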


I thought people here hated it when LLMs made http requests?


It's bad when they indiscriminately crawl for training, and it's not ideal (but understandable) to use the Internet to communicate with them (with online accounts associated with that, etc.) rather than running them locally.

It's not bad when they use the Internet at generation time to verify the output.


Also, for the most part this verification can be done with a HEAD request.
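
A minimal sketch of that kind of check, assuming Python with the requests library; the URLs and function name are illustrative only:

    import requests

    def url_exists(url: str, timeout: float = 5.0) -> bool:
        """Return True if the URL appears to resolve to a live page."""
        try:
            # HEAD fetches only the headers, so it's cheap for bulk checks.
            resp = requests.head(url, allow_redirects=True, timeout=timeout)
            if resp.status_code < 400:
                return True
            # Some servers reject or mishandle HEAD; fall back to GET.
            if resp.status_code in (403, 405):
                resp = requests.get(url, stream=True, timeout=timeout)
                return resp.status_code < 400
            return False
        except requests.RequestException:
            return False

    citations = [
        "https://example.com/real-paper",
        "https://example.com/possibly-hallucinated",
    ]
    verified = [u for u in citations if url_exists(u)]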


I don't know for certain what you're referring to, but the "bulk downloads" of the Internet that AI companies are executing for training are the problem I've seen cited, and that doesn't relate to LLMs checking their sources at query time.



