
You can finetune Whisper, Stable Diffusion, and LLMs up to about 15B parameters with 24GB of VRAM.
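A rough sketch of why 24GB is enough for parameter-efficient finetuning but not full finetuning of larger models. The byte counts below (fp16 weights, Adam optimizer state) are ballpark assumptions, and the estimate ignores activations, so real usage runs higher:

```python
def finetune_vram_gb(params_b, bytes_per_param=2, trainable_frac=1.0,
                     optimizer_bytes_per_param=8):
    """Back-of-the-envelope VRAM estimate in GB for a model with
    params_b billion parameters: weights + gradients + Adam state.
    Activations are ignored, so treat this as a lower bound."""
    weights = params_b * bytes_per_param                       # fp16 weights
    grads = params_b * trainable_frac * bytes_per_param        # grads for trainable params only
    opt = params_b * trainable_frac * optimizer_bytes_per_param  # Adam moments in fp32
    return weights + grads + opt

# Full finetune of a 7B model in fp16 with Adam blows past 24GB:
print(finetune_vram_gb(7))                        # ~84 GB
# LoRA-style training with ~1% trainable params stays near the weight footprint:
print(finetune_vram_gb(7, trainable_frac=0.01))   # ~14.7 GB
```

This is why the 3090's 24GB goes so far: with quantized weights and a small trainable adapter, even ~15B models fit; a full finetune of the same model would not.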

Which leads to the question of what hardware to get. Best bang for the buck right now is definitely a used 3090 at ~$700. If you want more than 24GB of VRAM, just rent the hardware, as it will be cheaper.

If you're not willing to drop $700, don't buy anything; just rent. I've had decent luck with vast.ai.



There is the world of used Nvidia Teslas, like the M40. Very cheap, but some assembly required.


I own a P100, a P4, and an M40. They either lack RAM or speed. Also, unless you're putting them in a server, you have to cool them yourself.

If your goal is to learn ML, don't tinker with very obsolete hardware. Rent or buy something modern.


What do you mean by assembly required? I looked them up and they look like normal graphics cards. Am I missing something?


They're datacenter GPUs, so you need a special power supply, or an adapter, to run them in a regular desktop.


From what I’ve read cooling can also be a challenge.


What are their advantages?


“Very cheap” isn’t enough?


Apparently there is a group of folks finetuning with these cards.


Source? I own three, and I would take a single 3090 any day. M40s are simply too old.


Can we use a 3090, or Nvidia GPUs in general, with a Mac? Do people generally have a Windows desktop for the GPUs?


No clue, but if you want to learn/finetune ML, use a Linux box; otherwise you will spend all your time fighting your machine. If you just want to run models, a Mac might work.



