This article might have a point about the data flywheel, but it's lost in the confused economics of the second half. Why would we expect to hire one engineer per p4d.24xlarge instance? Why would OpenAI need a whole p4d.24xlarge just to run fine-tuning? Why ignore the higher inference-side costs for fine-tuned models? And why would OpenAI spend _any_ money racking and stacking GPUs rather than just taking them at (hyperscaler) cost from Azure?
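For what it's worth, a quick back-of-envelope sketch of the one-engineer-per-instance assumption (all figures below are my own illustrative assumptions, not the article's numbers; the instance price is the rough public AWS on-demand list price, and the engineer figure is a guessed fully-loaded cost):

```python
# Rough check on what one-engineer-per-GPU-instance implies.
# Assumed numbers, for illustration only:
INSTANCE_HOURLY = 32.77      # ~p4d.24xlarge on-demand list price, $/hr (assumption)
ENGINEER_YEARLY = 300_000    # assumed fully-loaded engineer cost, $/yr (assumption)

HOURS_PER_YEAR = 24 * 365

instance_yearly = INSTANCE_HOURLY * HOURS_PER_YEAR  # ~$287k/yr at list price
total_yearly = instance_yearly + ENGINEER_YEARLY

# Pairing an engineer with every instance roughly doubles the per-instance
# cost -- and that's at retail; at hyperscaler internal cost the hardware
# side would be much smaller, making the staffing assumption dominate.
print(f"instance: ${instance_yearly:,.0f}/yr")
print(f"instance + engineer: ${total_yearly:,.0f}/yr")
```

Even at retail pricing, the staffing assumption alone is comparable to the hardware cost, which is why it matters so much to the article's conclusion.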