You don't seem to have understood what was tested.
The model answered the keyword prompt and spontaneously offered more details. That is, the authors were interested in whether it says "Popcorn" or "Chocolate" (or something else entirely) when the correct answer is "Popcorn". Not only does GPT-3 almost always choose "Popcorn", it also goes on to justify that choice by explaining that the subject is surprised.
The full data set isn't available yet (the author said they intend to provide it on the 9th of February, so I suppose it's possible they'll get to it this evening), but one of the most interesting things would be the weirder answers. If a model says "Popcorn" 98% of the time and "Chocolate" 0% of the time, that leaves 2% weird answers. Maybe it sometimes says "Popped corn" or "Sweet treat" or something else reasonable, but maybe it's completely off the rails: if you describe a bag of popcorn labelled as chocolate and the model sometimes answers "A fire-breathing lizard", that's pretty weird, right?
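Once the dataset is up, something like the sketch below could bucket the completions and surface that residual 2%. To be clear, the file name and column names here are guesses about what the released data might look like, not the paper's actual schema.

```python
# Rough sketch for bucketing model completions once the dataset is released.
# "tom_completions.csv" and the column name "completion" are assumptions,
# not the authors' actual format.
import csv
from collections import Counter

EXPECTED = "popcorn"   # the correct answer in this scenario
CONTROL = "chocolate"  # the (false) label on the bag

def bucket(completion: str) -> str:
    """Classify a completion as expected, control, or 'weird' (anything else)."""
    text = completion.lower()
    if EXPECTED in text:
        return "expected"
    if CONTROL in text:
        return "control"
    return "weird"

counts = Counter()
weird_examples = []

with open("tom_completions.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        label = bucket(row["completion"])
        counts[label] += 1
        if label == "weird":
            weird_examples.append(row["completion"])

total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {n} ({n / total:.1%})")

# The interesting part: what the leftover answers actually say.
for example in weird_examples[:10]:
    print("-", example)
```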