
I don't know, maybe I have to figure out another way to count the money, but that $200 gives me a lot of value, far more than $200 worth. I guess if you like sleeping and doing stuff other than driving Claude Code all the time, you might feel differently. For us it works well.


My question wasn't whether the $200 is worth it to the buyer. Renting an H100 for a month costs around $1,000 ($1.33+/hr). Pretend the usage isn't bursty (though really it is). If you could get 6 people on one, the company is making money selling inference.
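
For anyone who wants to check the arithmetic, here's the same back-of-envelope math as a tiny script. It's just a sketch using the numbers above; the $1.33/hr rate and the six-subscribers-per-GPU figure are my assumptions, not Anthropic's actual costs:

    # Rough break-even sketch; all inputs are assumptions from the comment above.
    hourly_rate = 1.33                # assumed H100 rental, $/hr
    hours_per_month = 24 * 30
    monthly_gpu_cost = hourly_rate * hours_per_month   # ~$958/month

    plan_price = 200                  # $/month subscription
    subscribers_per_gpu = 6           # assumed sharing, no burstiness

    revenue_per_gpu = plan_price * subscribers_per_gpu  # $1,200/month
    print(f"GPU cost: ${monthly_gpu_cost:,.0f}/month")
    print(f"Revenue:  ${revenue_per_gpu:,.0f}/month from {subscribers_per_gpu} subscribers")
    print(f"Margin:   ${revenue_per_gpu - monthly_gpu_cost:,.0f}/month")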


Let me know when you can run Opus on an H100.


I don't understand. Obviously I can't run Opus on an H100; only Anthropic can do that, since they are the only ones with the model. I am assuming they are using H100s, that the all-in cost for an H100 comes to less than $1,000/month, and doing some back-of-the-envelope math to say that, if they had a fleet of H100s at their disposal, it would take about six people running one flat out for the $200/month plan to be profitable.


Right, but it probably takes something like 8-10 H100s to run Claude Opus for inference, just memory-wise? I'm far from an expert, just asking.

Does "one" Claude Opus instance count as the full model being loaded onto however many GPUs it takes ?



