Hacker News

I've really liked having this in my prompt:

> Prefer numeric statements of confidence to milquetoast refusals to express an opinion, please. Supply confidence rates both for correctness, and for completeness.

I tend to get this at the end of my responses:

> Confidence in correctness: 80%

> Confidence in completeness: 75% (there may be other factors or options to consider)

It gives me some sense of how confident the AI really is, or how much info it thinks it's leaving out of the answer.



Unfortunately the confidence rating is also hallucinated.


Oh yeah, I know ChatGPT doesn't really "know" how confident it is. But there's still some signal in it, which I find useful.


Makes me curious what the signal-to-noise ratio is there. Maybe it's more misleading than helpful, or maybe the opposite.
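One way to actually measure that signal-to-noise question: log each stated confidence next to whether the answer later turned out correct, then check calibration. Here's a minimal sketch in Python; the helper names and the sample log are made up for illustration, not real measurements:

```python
def brier_score(records):
    """Mean squared error between stated confidence and actual outcome.
    0.0 is perfect; a constant 50% guess scores 0.25."""
    return sum((conf - correct) ** 2 for conf, correct in records) / len(records)

def bucket_calibration(records, buckets=(0.6, 0.8, 1.01)):
    """Group answers by stated confidence and report observed accuracy
    per bucket; well-calibrated confidence means the two roughly match."""
    out = {}
    lo = 0.0
    for hi in buckets:
        group = [correct for conf, correct in records if lo <= conf < hi]
        if group:
            out[(lo, hi)] = sum(group) / len(group)
        lo = hi
    return out

# Hypothetical log: (stated confidence, 1 if the answer was correct else 0)
log = [(0.8, 1), (0.8, 1), (0.75, 0), (0.9, 1), (0.6, 0), (0.85, 1)]
print(brier_score(log))
print(bucket_calibration(log))
```

If the per-bucket accuracies track the stated percentages, the numbers carry real signal; if every bucket looks the same, they're noise dressed up as precision.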



