I know that OpenAI has made compute deals with other companies, and as time goes on, the share of inference running on each provider will shift. But I doubt that much, if any, of it has moved out of Microsoft Azure data centers yet, so that's not a reason for differences in model performance.
With that said, Microsoft has a different level of responsibility, both to its customers and to its stakeholders, to provide safety than OpenAI or any other frontier provider does. That's not a criticism of OpenAI or Anthropic or anyone else; I believe they're all trying their best to provide safe usage. (Well, other than xAI and Grok, for which the lack of safety is a feature, not a bug.)
The risk to Microsoft of getting this wrong is simply higher than it is for other companies, and that's why they have a strong focus on Responsible AI (RAI) [1]. I don't know the details, but I have to assume there's a layer of RAI processing applied to models served through Azure OpenAI that isn't there when you call OpenAI's models directly through the OpenAI API. That layer is valuable to the companies that choose to run their inference through Azure and want to maximize safety.
I wonder if that's where some of the observed changes are coming from. I hope the commenter posts their evidence for further inspection; it would help everyone.
I'm a long-time C# dev who got into F# about five years ago. F# is so awesome; I hope that if Grace catches on, more people will pay some attention to it.
There's no way to build something with an intention as big as "replace Git" that won't invite knee-jerk reactions.
I know I'm building the thing that aligns with my creative and technical vision. That's all I can do. It will succeed or it won't, and the reactions from people who are already super-comfortable with the existing technology matter less than the reactions from people who only understand the basics of Git and are afraid of it. I'm building it for them (which includes me).
Not with something like GitHub Secret Scanning monitoring things. Or we could imagine a local ML model automatically checking every save before it gets uploaded.
This is an easily-solved problem. And in case one slips through, versions are easy to delete in Grace.
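To make that concrete, here's a minimal F# sketch of a local, pre-upload check using regexes. The scanBeforeUpload function and its hook point are hypothetical, and the patterns are a tiny illustrative subset of what a real scanner like GitHub Secret Scanning uses; this isn't Grace's actual implementation:

    // A minimal sketch of local secret scanning before upload. The hook point
    // (scanBeforeUpload) is hypothetical; the patterns are illustrative only.
    open System.IO
    open System.Text.RegularExpressions

    // A few well-known token shapes; real scanners use hundreds of
    // provider-specific patterns plus entropy checks.
    let secretPatterns =
        [ "AWS access key", Regex(@"AKIA[0-9A-Z]{16}")
          "GitHub token",   Regex(@"gh[pousr]_[A-Za-z0-9]{36,}")
          "Private key",    Regex(@"-----BEGIN (RSA |EC |OPENSSH )?PRIVATE KEY-----") ]

    /// Returns the names of any patterns found in the file; an empty list means OK to upload.
    let scanBeforeUpload (path: string) =
        let contents = File.ReadAllText path
        secretPatterns
        |> List.choose (fun (name, regex) -> if regex.IsMatch contents then Some name else None)

    // Usage: block (or warn on) the upload when anything matches.
    match scanBeforeUpload "src/appsettings.json" with
    | [] -> printfn "No secrets detected; OK to upload."
    | hits -> printfn "Blocked upload; possible secrets: %s" (String.concat ", " hits)

A real implementation would add entropy checks and provider-specific rules, but even this much catches the most common leaks.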
I have many other things that aren't secrets but that I still don't want uploaded.
Don't get me wrong, many of the concepts are great (such as the watch/auto-rebase). But I would still base everything on top of git. Call it the network effect or whatever, but every nice concept you promote could be done with a git wrapper. Versioning repos is a solved problem, and git is so much of a de facto standard that fighting it will be ... interesting.
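To be concrete about what a wrapper could look like, here's a rough F# sketch of the watch idea built on nothing but the git CLI. The repo path, the auto-save commit policy, and using pull --rebase as the rebase step are all illustrative assumptions, not how Grace actually works:

    // A rough sketch of "watch" as a git wrapper: turn every save into a commit.
    open System.Diagnostics
    open System.IO

    let repo = "/path/to/repo"   // illustrative; point this at a real working copy

    let git args =
        // Shell out to the git CLI; a real wrapper would check exit codes and handle failures.
        use p = Process.Start(ProcessStartInfo("git", args, WorkingDirectory = repo))
        p.WaitForExit()

    let watcher = new FileSystemWatcher(repo, IncludeSubdirectories = true, EnableRaisingEvents = true)

    watcher.Changed.Add(fun e ->
        if not (e.FullPath.Contains ".git") then
            git "add --all"
            git "commit --quiet --message auto-save"   // every save becomes a commit
            git "pull --rebase --quiet")               // naive stand-in for auto-rebase

    System.Console.ReadLine() |> ignore   // keep the process alive while watching

It's naive (no debouncing, no conflict handling), but it shows the shape: the nice UX can sit on top of git instead of replacing it.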
> or we could imagine a local ML model automatically checking every save before it gets uploaded.
It's so 2024 to not actually engineer things but to just say (only say) "we'll tackle it with some ML model". I get goosebumps at the thought that a VCS would require a multi-GB data file for the model, or (let's be honest here, it's 99% more likely, since nobody trains their own models) an online ChatGPT connection.