Hacker News

> Strategic partnership enables OpenAI to build and deploy at least 10 gigawatts of AI datacenters with NVIDIA systems representing millions of GPUs

I know watts, but I really can’t quantify this. How many Nvidia GPUs are there in the amount of servers that consume 10 GW? Do they all use the same chip? If a newer chip consumes less power, does the deal then imply more servers? Did GPT write this post?



You don’t need AI to write vague, waffly press releases. But to put this in perspective: an H100 has a TDP of 700 watts, and the newer B100s are 1000 watts, I think?

Also, the idea of a newer Nvidia card using less power is très amusant.
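As a rough sanity check (a sketch using only the card TDPs quoted above; it ignores CPUs, networking, and cooling, so real counts would be lower):

```python
# Naive upper bound on GPU count from card TDP alone, using the figures
# from the comment above (700 W for an H100, ~1000 W for a B100).
# Assumes all 10 GW goes to the cards themselves, which it wouldn't.
TOTAL_W = 10e9  # 10 GW

for name, tdp_w in [("H100", 700), ("B100", 1000)]:
    print(f"{name}: at most {TOTAL_W / tdp_w:,.0f} cards")
```

So even by the loosest measure, 10 GW is on the order of 10–14 million cards, before accounting for everything else in the datacenter.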


A 72-GPU NVL72 rack consumes up to 130 kW, so 10 GW works out to a little more than 5,500,000 GPUs.
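The rack-level estimate above can be reproduced with a quick back-of-envelope calculation (assuming the 130 kW per NVL72 rack and 72 GPUs per rack figures from the comment):

```python
# Back-of-envelope GPU count at rack granularity, using the NVL72
# figures quoted above: 130 kW per rack, 72 GPUs per rack.
TOTAL_POWER_W = 10e9   # 10 GW
RACK_POWER_W = 130e3   # 130 kW per NVL72 rack
GPUS_PER_RACK = 72

racks = TOTAL_POWER_W / RACK_POWER_W
gpus = racks * GPUS_PER_RACK
print(f"~{racks:,.0f} racks, ~{gpus:,.0f} GPUs")
```

That lands at roughly 77,000 racks and about 5.5 million GPUs, matching the comment's figure.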


$150-200B worth of hardware. About 2 million GPUs.

So this investment is structured somewhat like the Microsoft investment, where equity was traded for Azure compute.




