You are years behind if you think you're training a model worth anything on consumer-grade GPUs. Table stakes these days are 8x A100 pods, and lots of them. Luckily you can just get DGX pods so you don't have to build racks, but for many orgs just renting the pods is much cheaper.
Ah yes, because there is only one way to do deep learning, and it is of course stacking models so large they aren't useful outside pods of GPUs. That is surely the way to go if you want to make money (from VCs, of course, because you won't have many users willing to pay enough for you to ever break even, as with OpenAI and the other big model providers; maybe you can get some money or sponsorship from the state or a university).
The market for small, efficient models running locally on-device is pretty big, maybe even the biggest that exists right now [iOS, Android, and macOS are pretty easy to monetize with low-cost models that are useful].
I can assure you of that, and you can do it on even 4x RTX 3090s [it won't be fast, but you'll get there :)]
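For anyone curious what that actually looks like, here is a minimal data-parallel training sketch in PyTorch for a 4-GPU box. The model and dataset are throwaway placeholders, not anything specific from this thread, and it assumes you launch it with torchrun.

    # Minimal data-parallel training sketch for a 4-GPU box (e.g. 4x RTX 3090).
    # Launch with: torchrun --nproc_per_node=4 <this_file>.py
    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

    def main():
        # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE for each process.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Placeholder model and synthetic data; swap in your real ones.
        model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10)).cuda(local_rank)
        model = DDP(model, device_ids=[local_rank])

        data = TensorDataset(torch.randn(10_000, 512), torch.randint(0, 10, (10_000,)))
        sampler = DistributedSampler(data)  # each GPU sees a distinct shard of the data
        loader = DataLoader(data, batch_size=64, sampler=sampler)

        opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
        loss_fn = nn.CrossEntropyLoss()

        for epoch in range(3):
            sampler.set_epoch(epoch)  # reshuffle the shards each epoch
            for x, y in loader:
                x, y = x.cuda(local_rank), y.cuda(local_rank)
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()  # gradients are all-reduced across the 4 GPUs here
                opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Nothing about this changes with bigger hardware; the same script scales from a hobbyist 3090 box to a rented pod, which is rather the point.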
Years behind what? Table stakes for what? There is much more to ML than the latest transformer and diffusion models. While those get the attention, the volume of research outside that space dominates.
There is tons of value to be had from smaller models. Even some state-of-the-art results can be obtained on a relatively small set of commodity GPUs. Not everything is GPT-scale.
Isn't a key selling point of the latest, hottest model, the one that's on the front page of Hacker News multiple times right now, the fact that it fits on consumer-grade GPUs? Surely some of the interesting ideas it's spawning right now come from people doing transfer learning on GPUs whose names don't end in "100", don't you think?
You know there's a huge difference between training the original model and doing transfer learning to apply it to a new use case, right? Saying people are years behind unless their work runs on 8x A100 pods is pretty ignorant of how most applications get built. Not everyone's trying to design novel model architectures, nor should they.
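To make that distinction concrete, here is roughly what the transfer-learning side looks like in PyTorch; the backbone choice, class count, and dummy batch are assumptions for illustration only. The expensive pretraining was already paid for by someone else, and all you train locally is a small head, which fits comfortably on a single consumer GPU.

    # Transfer-learning sketch: reuse a pretrained backbone, train only a new head.
    import torch
    import torch.nn as nn
    from torchvision import models

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Load a backbone someone else already spent the big-GPU budget on.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pretrained weights...
    for p in model.parameters():
        p.requires_grad = False

    # ...and replace the final layer with a head for your own task (say, 5 classes).
    model.fc = nn.Linear(model.fc.in_features, 5)
    model = model.to(device)

    # Only the new head's parameters get updated.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Dummy batch standing in for a real DataLoader over your own dataset.
    images = torch.randn(16, 3, 224, 224, device=device)
    labels = torch.randint(0, 5, (16,), device=device)

    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"one step done, loss={loss.item():.3f}")

That gap between "pretrain from scratch" and "adapt a pretrained model" is exactly why most applications never need an A100 in the first place.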