I didn't know that OpenAI had added what they call an organization verification process for API calls to some models. While I hadn't noticed this change at work using OpenAI models, when I tried GPT-5 on my personal laptop, I ran into this obnoxious verification requirement.
It seems this is all because users can get thinking traces from API calls, and OpenAI wants to prevent other companies from distilling its models.
Although I don't think OpenAI is threatened by a single user from Korea, I don't want to go through this process, for many reasons. But who knows? This kind of verification may become the norm, leaving users with no way to use frontier models without it. "If you want to use the most advanced AI models, verify yourself so that we can track you down when something bad happens." Is that what they're saying?