> But what if OpenAI decides to revoke access to that API feature I’m using?
The essay starts with personal computers, and it's worth asking why computers of that era were called "personal" in the first place: before them came mainframes, which imposed the same kind of gatekeeping that a cluster of H100s in a data center does today.
Computers started out as centralized IBM mainframes, but in 1975 people could buy an Altair kit - the same year the MOS 6502 was released. There is some centralization in neural networks now; anyone displeased by that can work to do the same kind of thing that MITS, MOS, Apple, and even Microsoft did.
> Flipping all those numbers to get the result (inference), and especially determining those numbers in the first place (training), requires a vast amounts of resources, data and skill.
Using a Stable Diffusion model as my base, a number of pictures of a friend, and a day or two's work on my relatively underpowered Nvidia desktop card, I can now make Stable Diffusion creations with my friend in the mix. This can be done by several methods: textual inversion, hypernetworks, Dreambooth (I've also been told LoRA works, but I haven't tried it myself).
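For anyone wondering why techniques like LoRA are cheap enough to run on a desktop card, the core idea fits in a few lines: freeze the big pretrained weight matrix and train only a small low-rank update on top of it. A toy numpy sketch (the shapes and init here are illustrative, not Stable Diffusion's actual ones):

```python
import numpy as np

# Toy LoRA: instead of retraining W (d x d), learn a rank-r update B @ A.
d, r = 8, 2
rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable, small
B = np.zeros((d, r))                 # trainable, zero-init so the adapter
                                     # starts as a no-op

def adapted(x):
    # forward pass through the adapted weight
    return x @ (W + B @ A).T

x = rng.normal(size=(d,))
# trainable parameters: 2*r*d instead of d*d
n_trainable = A.size + B.size
```

The payoff is the parameter count: you backpropagate through `A` and `B` only, which for real models is a tiny fraction of the frozen weights, so it fits in consumer VRAM.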
On the same relatively underpowered Nvidia card I can run the Llama LLM - only a few billion parameters, quantized to lower precision - and the results are decent enough. I've been told people are fine-tuning these kinds of LLMs as well.
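The "quantized" part is less exotic than it sounds: store the weights as small integers plus a scale factor, and multiply back at inference time. A minimal int8 sketch (real schemes, like the ones llama.cpp uses, keep a scale per block of weights rather than per tensor, but the idea is the same):

```python
import numpy as np

def quantize_int8(w):
    # map floats into [-127, 127] using one shared scale factor
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# each weight now takes 1 byte instead of 4, at a small precision cost
```

That 4x (or, with 4-bit schemes, 8x) memory reduction is exactly what lets a multi-billion-parameter model squeeze into a consumer card's VRAM.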
There's nothing inherently centralized about neural networks - although OpenAI, Nvidia, Google, Facebook, Anthropic and the like tend to have the people who know the most about them, and have enormous resources to put behind development. I'm sure something with the power of an H100 will become cheaper in the coming years. I'm sure tricks will develop to allow inference and even training without a massive amount of VRAM - I see this happening all over already.
If you don't want some centralized neural network monolith, do what the people at MITS, MOS, and Apple did - do what people are already doing: figuring out how to run LLMs on weaker cards with quantization, figuring out how to offload weights from VRAM to RAM for various PyTorch operations. Centralization isn't inherent to neural networks; if you want things more decentralized, there's plenty that can be done in many areas, and getting to work on it is how you achieve it.
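The VRAM-to-RAM offload trick is, at heart, just a loop: keep every layer's weights in system RAM and move only the currently executing layer onto the GPU. (This is roughly the pattern behind Hugging Face Accelerate's `device_map` and llama.cpp's partial GPU layers; the toy classes below only model the bookkeeping, not real tensor transfers.)

```python
class Layer:
    """Stand-in for a network layer; `device` marks where its weights live."""
    def __init__(self):
        self.device = "cpu"  # weights start in system RAM

    def forward(self, x):
        # a real layer would do a matmul here; this just marks the pass
        return x + 1

def offloaded_forward(layers, x):
    # only one layer's weights occupy the "GPU" at any moment,
    # so peak VRAM is one layer instead of the whole model
    for layer in layers:
        layer.device = "gpu"   # copy weights into VRAM
        x = layer.forward(x)
        layer.device = "cpu"   # evict before loading the next layer
    return x

layers = [Layer() for _ in range(4)]
out = offloaded_forward(layers, 0)
```

The cost is the PCIe transfer time per layer, which is why this runs slower than keeping everything resident - but slower-on-your-own-hardware beats not-running-at-all.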