> There are rumours going around that the GeForce 4090 is going to have a max TDP of 650W.
This is kind of a move of desperation. GPUs are massively parallel, so it's straightforward to make performance scale roughly linearly with power consumption. They could always have done this.
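As a back-of-the-envelope sanity check (the standard CMOS dynamic-power approximation, not anything Nvidia has published):

    P_dyn ≈ α · C · V² · f

Replicating compute units at fixed voltage V and clock f scales both throughput and power by the unit count, so performance tracks power roughly linearly. Chasing the same throughput through higher clocks instead forces V up too, so power grows superlinearly. That's why "just raise the power budget" is such an easy lever on a massively parallel part.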
Nvidia is used to not having strong competition. Now AMD and Intel are both gunning for them and the market share is theirs to lose. Things like this are an attempt to stay on top. But the competition could just do the same thing, so what good is it?
GeForce uses Samsung 8nm, which is obviously behind the competitors' processes, yet GeForce cards are still competitive. That means the design itself is great. Nvidia is either losing the fab race or doesn't want to pay more for TSMC's advanced nodes.
They all just use whichever fab they want. Not long ago AMD was on Samsung/GloFo 14nm and Nvidia on TSMC 12nm. The GeForce 4090 is supposed to be on TSMC 5nm.
The RTX 3080 (Samsung 8N) is slower than the RX 6900 XT (TSMC N7) and has a 20W higher TDP (320W vs 300W). The RTX 3080 Ti and RTX 3090 are faster, but at a 50W higher TDP (350W). Their design isn't magic; they're just compensating for the worse process technology by spending more power.
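Rough arithmetic on those numbers (using TDP as a proxy for real draw, so take it loosely): 350W / 300W ≈ 1.17, i.e. Nvidia spends roughly 17% more board power to pull ahead of a part on a better node.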