I do have a Windows machine with an NVIDIA GPU (an RTX 2070 Super), but I bought it years ago and don't use it anymore.
ML in the cloud is far more convenient because you can trivially match your cost to the task: training? Spin up something big and expensive. Inference? Something cheaper with less VRAM is usually fine.
I also like that I can run multiple instances simultaneously, which would be prohibitively expensive if it meant keeping multiple machines sitting around waiting for me to use them.