
Why doesn't Flash get it right, yet comes up with plausible-sounding nonsense? That suggests it was trained on some texts in the area.

What would make 2.5 Pro (or anything else) categorically better is the ability to say "I don't know".

There will be things that Claude 3.7 or Gemini Pro will not know, and the interpolations they come up with will not make sense.
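
For what it's worth, you can approximate "I don't know" from the outside with a confidence cutoff. A toy sketch, not any real API: generate_with_logprobs is a hypothetical hook standing in for whatever API you have, the threshold is made up, and average token probability is only a crude proxy for whether the model actually knows anything.

    import math

    CONFIDENCE_THRESHOLD = 0.75  # made-up cutoff; would need tuning

    def answer_or_abstain(prompt, generate_with_logprobs):
        # generate_with_logprobs is a hypothetical hook: it returns
        # the generated text plus per-token log-probabilities.
        text, logprobs = generate_with_logprobs(prompt)
        # Geometric mean of token probabilities, a crude confidence proxy.
        confidence = math.exp(sum(logprobs) / len(logprobs))
        return text if confidence >= CONFIDENCE_THRESHOLD else "I don't know."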



Accuracy goes up as you use heavier models. More accuracy is always preferable, and the jump from Flash to Pro is considerable.

You must rely on your own mental model to verify the answers it gives.

On hallucination: it is a problem, but again, it diminishes as you move to heavier models.


> You must rely on your own mental model to verify the answers it gives

This is what significantly reduces the utility: if it can only be trusted to answer things I already know the answer to, why would I ask it anything?


It's the same reason I find it useful to read comments on Reddit and to ask people for their advice and opinions.

I have written about it here: https://news.ycombinator.com/item?id=44712300


Verification is often easier and faster than coming up with the answer from scratch.
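
That holds in practice with code, too: even if you couldn't have written the fast solution yourself, you can often check a model's attempt against a slow brute-force oracle on small random inputs. A sketch, with model_max_subarray standing in for whatever the model actually produced:

    import random

    def model_max_subarray(xs):
        # Stand-in for a model-generated answer (Kadane's algorithm,
        # O(n)); inventing this is the "hard" direction.
        best = cur = xs[0]
        for x in xs[1:]:
            cur = max(x, cur + x)
            best = max(best, cur)
        return best

    def brute_force(xs):
        # The "easy" direction: a slow O(n^2) check anyone can write.
        return max(sum(xs[i:j]) for i in range(len(xs))
                   for j in range(i + 1, len(xs) + 1))

    for _ in range(1000):
        xs = [random.randint(-10, 10) for _ in range(random.randint(1, 15))]
        assert model_max_subarray(xs) == brute_force(xs)
    print("model's answer survived 1000 random checks")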


True! Generating an answer is much harder than verifying one. I wonder if a parallel can be drawn to the P vs NP problem.
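
Factoring is the textbook illustration of that asymmetry: checking a claimed factorization is one multiplication, while finding it is the hard part. A toy sketch (1_000_000_007 and 1_000_000_009 are known primes, chosen just for illustration):

    def verify_factors(n, p, q):
        # Verification: constant time, one multiplication.
        return p * q == n and p > 1 and q > 1

    def find_factor(n):
        # Generation: trial division, ~sqrt(n) steps in the worst case.
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        return None  # n is prime

    n = 1_000_000_007 * 1_000_000_009
    print(verify_factors(n, 1_000_000_007, 1_000_000_009))  # instant: True
    # find_factor(n) would grind through ~10^9 trial divisions
    # before arriving at the same answer.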



