Hacker News

https://www.runpod.io/

I do have a Windows machine with an Nvidia GPU (RTX 2070 Super), but I don't use it anymore (bought it years ago).

ML in the cloud is way more convenient because you can trivially adjust your cost based on what you're doing. Training? Spin up something big and expensive. Inference? Something cheaper (less VRAM) is usually fine.
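The training-vs-inference VRAM gap can be sketched with a back-of-envelope estimate. This is a rough rule of thumb, not an exact formula: the byte counts and the 7B-parameter example below are illustrative assumptions (fp16 weights for inference; fp16 weights + fp16 gradients + fp32 master weights + two fp32 Adam moments for mixed-precision training, ignoring activations and overhead).

```python
def estimate_vram_gb(n_params, training=False):
    """Rough VRAM estimate in GB (rule of thumb, not exact).

    Inference in fp16: ~2 bytes per parameter (weights only).
    Training with Adam + mixed precision: ~16 bytes per parameter
    (fp16 weights + fp16 grads + fp32 master copy + two fp32
    optimizer moments), ignoring activations and framework overhead.
    """
    bytes_per_param = 16 if training else 2
    return n_params * bytes_per_param / 1e9

# Hypothetical 7B-parameter model:
print(estimate_vram_gb(7e9, training=False))  # 14.0  -> fits a modest GPU
print(estimate_vram_gb(7e9, training=True))   # 112.0 -> needs big iron
```

Even as a crude estimate, it shows why renting a large instance only for training runs, then dropping to a small one for inference, saves real money.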

I also like that I can run multiple instances simultaneously, something that would be prohibitively expensive if I had to have multiple machines sitting around waiting for me to use them.
