
I’m not sure it has the self-reflection capability to understand the difference between knowing and not knowing, but I would love to see some evidence that it does.

The only thing I can think of is that it appears to be capable of symbolic manipulation, and using this it can produce output that is correct, novel (in the sense that it’s not a direct copy of any training data), and compositional at some level of abstraction. Given that, I’d guess it should be able to tell whether its internal knowledge on a topic is “strong” (what is truth? Is it knowledge-graph overlap?) and therefore tell when it doesn’t know, or only weakly knows, something. I’m really not sure how to test this.
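One crude way to probe this, as a minimal sketch rather than a rigorous test: score the model's own answer tokens and treat low average log-probability as a hint of weak internal knowledge. The sketch below uses the HuggingFace transformers library with gpt2 as a stand-in model; the helper name mean_answer_logprob is made up for illustration.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # gpt2 is just a stand-in; any causal LM on the Hub works the same way.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def mean_answer_logprob(prompt, answer):
        """Average per-token log-probability the model assigns to `answer`
        when it follows `prompt` -- a crude proxy for how 'strong' the
        model's internal knowledge of that continuation is."""
        prompt_ids = tok(prompt, return_tensors="pt").input_ids
        full_ids = tok(prompt + answer, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(full_ids).logits
        # Log-probs of each token given everything before it.
        logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
        targets = full_ids[:, 1:]
        per_token = logprobs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
        # Keep only the answer tokens (those after the prompt). The leading
        # space in `answer` keeps GPT-2's BPE tokenization consistent at
        # the prompt/answer boundary.
        answer_part = per_token[:, prompt_ids.shape[1] - 1:]
        return answer_part.mean().item()

    # A continuation the model has likely seen a lot vs. one it hasn't:
    print(mean_answer_logprob("The capital of France is", " Paris"))
    print(mean_answer_logprob("The capital of France is", " Lyon"))

The obvious caveat: high token probability measures familiarity, not truth, so a model can be confidently wrong, which is exactly the knowing-vs-not-knowing gap being discussed here.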



I was more using "doesn't know" in the sense of having no evidence or training material suggesting the thing it said is true. I'm not ascribing actual brain functions to the current generation of AI.


I tried asking ChatGPT about the e/acc (accelerationism) moniker some Twitter users sport nowadays. Not in its training data; clueless.


Of course it is, that’s domain knowledge. How would it know about things that it’s never been exposed to?!

Novel compositions of existing knowledge are totally different from novel sensory input.


Well, I had no idea when the moniker started being used, so I wouldn't know whether it fell before or after the knowledge cutoff date.



