I’m not sure it has the self-reflection capability to understand the difference between knowing and not knowing, but I would love to see some evidence showing this.
The only thing I can think of is that it appears to be capable of symbolic manipulation, and using this it can produce output that is correct, novel (in the sense that it isn’t a direct copy of any training data), and compositional at some level of abstraction. Given that, I’d guess it should be able to tell whether its internal knowledge on a topic is “strong” (what is truth? Is it knowledge graph overlap?) and therefore tell when it doesn’t know, or only weakly knows, something. I’m really not sure how to test this.
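One cheap way to poke at it, purely as a sketch: sample the same question several times at nonzero temperature, measure how often the answers agree, and compare that with the model’s own stated confidence. The `generate` callable below is a stand-in for whatever sampling call your model exposes, not a real API.

```python
import collections

def consistency_probe(generate, question, n_samples=10):
    """Rough proxy for how 'strongly' a model knows something:
    ask the same question several times (temperature > 0) and
    measure how often the answers agree with each other.

    `generate` is assumed to be a function prompt -> answer string;
    swap in whatever interface your model actually has.
    """
    answers = [generate(question).strip().lower() for _ in range(n_samples)]
    counts = collections.Counter(answers)
    top_answer, top_count = counts.most_common(1)[0]
    agreement = top_count / n_samples  # 1.0 = same answer every time
    return top_answer, agreement

# Compare `agreement` with the model's own stated confidence
# (e.g. asking "how sure are you?") to see whether its self-report
# tracks how stable its answer actually is.
```

Low agreement with high stated confidence would at least be evidence that it can’t tell when it only weakly knows something.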
I was more using "doesn't know" in the sense of having no evidence or training material suggesting the thing it said is true. I'm not attributing actual brain functions to the current generation of AI.