The comparison here is not apples to apples. While fine tuning is less costly with OpenAI, I'd argue that the underlying cost of running inference on GPT-3.5 and on a fine-tuned version of it should be roughly the same, yet fine-tuned models are billed at a much higher per-token rate. OpenAI is gouging you on inference, which is how it can offer fine tuning at a seemingly reasonable price.
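For a rough sense of scale, here is a back-of-the-envelope sketch. The per-1K-token prices are the ones I recall from the GPT-3.5 Turbo fine-tuning launch and the workload numbers are made up for illustration, so treat everything below as an assumption and check the current pricing page before relying on it:

```python
# Rough cost comparison: base GPT-3.5 Turbo vs. a fine-tuned GPT-3.5 Turbo.
# Prices are $ per 1K tokens as I remember them from the August 2023
# fine-tuning launch -- illustrative assumptions, not current pricing.
BASE_INPUT, BASE_OUTPUT = 0.0015, 0.002   # base gpt-3.5-turbo
FT_INPUT, FT_OUTPUT = 0.012, 0.016        # fine-tuned gpt-3.5-turbo
FT_TRAINING = 0.008                       # training, per 1K tokens

def monthly_inference_cost(input_mtok: float, output_mtok: float,
                           in_price: float, out_price: float) -> float:
    """Dollar cost for a month of traffic; token counts given in millions."""
    return input_mtok * 1_000 * in_price + output_mtok * 1_000 * out_price

# Hypothetical workload: 50M input tokens and 10M output tokens per month,
# plus a one-off fine-tuning run over 10M training tokens.
base = monthly_inference_cost(50, 10, BASE_INPUT, BASE_OUTPUT)
ft = monthly_inference_cost(50, 10, FT_INPUT, FT_OUTPUT)
training = 10_000 * FT_TRAINING

print(f"base GPT-3.5 inference:       ${base:,.0f}/month")
print(f"fine-tuned GPT-3.5 inference: ${ft:,.0f}/month")
print(f"one-off fine-tuning run:      ${training:,.0f}")
print(f"inference markup:             {ft / base:.1f}x")
```

With those assumed prices the one-off fine-tuning run is about $80, while the recurring inference bill jumps from roughly $95/month to $760/month, an ~8x markup. That is the point: the cheap-looking fine-tuning fee is dwarfed by the ongoing inference premium.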
Also, it's worth noting that fine tuning produces a vast amount of data for OpenAI about the use cases where fine tuning is actually useful.