Thanks for the link. So there's a holdout set that has yet to be used to verify the 25% claim. He also says he doesn't believe OpenAI would sabotage itself by gaming its internal benchmark numbers, since that would be exposed easily enough, either by the results on the holdout set or by the public rerunning the benchmarks themselves. Seems reasonable to me.
Perhaps what he meant is that the public will be able to benchmark the model themselves by throwing math problems of varying difficulty at it, not necessarily the FrontierMath benchmark itself. It should become pretty obvious whether or not they were faking the results.
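For what it's worth, a minimal sketch of that kind of independent spot-check could look like the snippet below. The problem list, the exact-match grading, and the model name are all placeholders I made up for illustration; a real evaluation would need careful answer parsing and far more items. It assumes the official OpenAI Python client with an API key in the environment.

    # Rough sketch: score a model on your own problems with known answers.
    # Everything here (problem list, model name, exact-match grading) is
    # illustrative, not a real evaluation harness.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    # Hypothetical hand-written problems with known final answers.
    problems = [
        ("What is the sum of the first 100 positive integers?", "5050"),
        ("How many primes are there below 20?", "8"),
    ]

    correct = 0
    for question, expected in problems:
        resp = client.chat.completions.create(
            model="o1-preview",  # placeholder model name
            messages=[{"role": "user",
                       "content": question + " Reply with only the final number."}],
        )
        answer = resp.choices[0].message.content.strip()
        correct += answer == expected  # naive exact-match grading

    print(f"{correct}/{len(problems)} correct")

The point isn't the code, it's that anyone with API access can run this kind of check, which is why quietly juicing a private benchmark is hard to sustain.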
It's been found [0] that slightly varying Putnam problems causes a 30% drop in o1-Preview accuracy, but that hasn't put a dent in OAI's hype.
There's absolutely no comeuppance for juicing benchmarks, especially ones no one has access to. If performance of o3 doesn't meet expectations, there'll be plenty of people making excuses for it ("You're prompting it wrong!", "That's just not its domain!").
> If performance of o3 doesn't meet expectations, there'll be plenty of people making excuses for it
I agree, and I can definitely see that happening, but it's also not impossible, given the incentives around and impact of this technology, for some other company or community to create another FrontierMath-like benchmark to cross-validate the results.
I also don't dispute that OpenAI could have faked these results. Time will tell.
Their head mathematician says they have the full dataset, except for a holdout set which is still being developed (i.e. it doesn't exist yet):
https://www.reddit.com/r/singularity/comments/1i4n0r5/commen...