Hacker News

It's strange how Nvidia just doubled down on a flawed design for no apparent reason. It doesn't even accomplish anything: the adapter is so short that you still have the same mess of cables at the front of the case as before.


This connector somehow has its own Wikipedia page, and most of it is about how bad it is. Look at the table at the end: https://en.wikipedia.org/wiki/16-pin_12VHPWR_connector#Relia...

The typical way to use these is also inherently flawed. On the Nvidia FE cards, they use a vertical connector with a bus bar joining all pins directly inside the connector. Meanwhile, the adapter has a similar bus bar onto which all the incoming 12 V wires are soldered. This means you have six pins per potential connecting two bus bars. Guess how this ensures relatively even current distribution? It doesn't, at all. It relies entirely on the contact resistances of the pins happening to match.

Contrast this with the old 8-pin design, where each pin had its own 2-3 ft wire to the PSU, which adds resistance in series with each pin. That in turn reduces the influence of contact resistance on current distribution. And all cards had separate shunts for metering and actively balancing current across the multiple 8-pin connectors used.
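To see why the series wire resistance matters, here's a toy current-divider model (all resistance and current values are illustrative assumptions, not measured figures): pins in parallel split the total current in proportion to each path's conductance, so a spread in contact resistance causes a big imbalance when there's nothing else in the path, and a much smaller one when each pin has its own length of wire in series.

```python
# Toy model: six parallel pins sharing one 12 V rail. Each path's resistance
# is its contact resistance plus any per-pin series wire resistance; all
# paths see the same voltage drop, so pin current is proportional to
# 1 / (R_contact + R_wire). All numbers below are hypothetical.

def pin_currents(contact_res_ohms, wire_res_ohms, total_amps):
    """Split total_amps across parallel paths in proportion to conductance."""
    conductances = [1.0 / (rc + wire_res_ohms) for rc in contact_res_ohms]
    g_total = sum(conductances)
    return [total_amps * g / g_total for g in conductances]

# Five healthy contacts at ~1 mOhm, one worn contact at 5 mOhm.
contacts = [0.001] * 5 + [0.005]

# Bus-bar style (12VHPWR FE adapter): no per-pin series resistance.
bus_bar = pin_currents(contacts, wire_res_ohms=0.0, total_amps=50)

# Old 8-pin style: each pin gets its own wire, ~10 mOhm of copper in series.
per_wire = pin_currents(contacts, wire_res_ohms=0.010, total_amps=50)

print([round(i, 2) for i in bus_bar])   # healthy pins climb to ~9.62 A each
print([round(i, 2) for i in per_wire])  # spread shrinks to ~8.72 A vs ~6.4 A
```

With the bus bar, the worn pin sheds its share onto the remaining five; with per-pin wires, the wire resistance swamps the contact-resistance spread and the currents stay close to even.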

The 12VHPWR cards don't do this and the FE cards can't do this for design reasons. They all have a single 12 V plane. Only one ultra-expensive custom ASUS layout is known to have per-pin current metering and shunts (but it still has a single 12 V plane, so it can't actively balance current), and it's not known whether it is even set up to shut down when it detects a gross imbalance indicating connector failure.


Was the old design actively balanced? I think the current of each pin was only monitored.


I was under the impression it saves them money. Is that correct?

It is also a power play. By potentially introducing a PSU connector that AMD and Intel do not use, they abuse their market power to limit interoperability.

Plus probably some internal arrogance about not admitting failures.


> By potentially introducing a PSU connector AMD and Intel do not use they abuse their market power to limit interoperability.

They are free to use it; they just don't, because it is a stupid connector. The cards that need 600W are gonna need an enormous amount of cooling and therefore a lot of space anyway; no point in making the connector small.

Yes, NVIDIA created an amazingly small 5090 FE, but none of the board partners have followed suit, so most customers will see no benefit at all.


That's the majority understanding, but I suspect it was a simple "update" to the "same" connector: the old one was a product called Molex Mini-Fit, and the new one is their newer Micro-Fit connector.


> Plus probably some internal arrogance about not admitting failures.

Arrogance is good. Accelerates the "forced correction" (aka cluebat) process. NVIDIA needs that badly.


I doubt engineering a new connector (I think it's new? Unlike the Mini-Fit Jr which has been around for like 40-50 years) and standing up a supply chain for it could offset the potentially slightly lower BOM cost of using one specialty connector instead of three MiniFit Jr 8-pins. However, three of those would not have been enough for the 4090, nevermind the 5090.


> three of those would not have been enough for the 4090, nevermind the 5090.

Oh, you are right, these PCIe power connectors can only deliver 150W each, so you would need 4 of them for a 4090/5090. I guess it actually makes sense to create a new standard for it then; hopefully they can make a newer revision of that connector that is safer.

In theory, with the new standard you can have a single cable from the PSU to the GPU instead of 4, which would be a huge improvement. Except that if you use one and your PC then catches fire, the community will blame you for it. People in the reddit thread [1] were arguing that it was the user's own fault for using a "third party" connector.

[1] https://www.reddit.com/r/nvidia/comments/1ilhfk0/rtx_5090fe_...


EPS is practically identical to PCIe, just keyed slightly differently, and it can handle 300W. It's used for the CPU power connector and on some data centre GPUs. I've never been clear on why it didn't take over from the PCIe standard when more power was needed.


The old Mini-Fit takes 10A/pin, or theoretically 480W for an 8-pin. Existing PSUs would not be rated for that much current on a PCIe harness, so connector compatibility has to be intentionally broken for idiot-proofing purposes, but connector-wise, up to 960W (before safety margins) could technically be supplied just fine with 2x PCIe 8-pin.
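A back-of-the-envelope check of those figures (this treats the 8-position housing as four 12 V/ground pairs at the 10 A/pin rating, as the 480 W number above implies; actual derating depends on wire gauge, housing, and temperature):

```python
def connector_power_w(current_pairs, amps_per_pin, volts=12.0):
    """Theoretical connector power: one 12 V pin per pair carries the current."""
    return current_pairs * amps_per_pin * volts

print(connector_power_w(4, 10))      # one 8-pin at 10 A/pin: 480.0 W
print(2 * connector_power_w(4, 10))  # two 8-pins, before safety margins: 960.0 W
```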


It saves them money on a four-digit MSRP. I think they could afford to be less thrifty.


>By potentially introducing a PSU connector AMD and Intel do not use they abuse their market power to limit interoperability.

I suppose, but this could be overcome by AMD/Intel shipping an adapter cable.


The connector is a PCI spec; it's not an Nvidia thing, it's just that they introduced devices using it first.


I don't think that's correct. Nvidia used that connector first, and then a similar PCI spec came out. Compatibility is limited. See https://www.hwcooling.net/en/nvidia-12pin-and-pcie-5-0-gpu-p... from back then.


I'd forgotten about the weird 30 series case, but the 40/50 series ones are the PCI spec connector.


Being a PCI spec connector doesn't mean it isn't an Nvidia thing. It seems pretty likely at this point that Nvidia forced this through, seeing as there's zero other users of this connector. Convincing PCI spec consortium to rubber stamp it probably wasn't very hard for Nvidia to do.


Can't have 4 connectors going into 1 video card, that would look ridiculous :/

- Nvidia



