12VHPWR has almost no safety margin. Any minor problem with it rapidly becomes major. 600W is scary, and there are reports of 800W spikes.
12V2x6 is particularly problematic because any imbalance, such as a bad connection on a single pin, will quickly push things over spec. For example, at 600W, 8.3A is carried on each pin in the connector. Molex Micro-Fit 3.0 connectors are typically rated to 8.5A -- that's almost no margin. If a single connection is bad, current per pin goes to 10A and we are over spec. And that's when things are mated correctly: 8.5-10A through a partially mated pin will rapidly heat it to the point of melting solder. Hell, the 16 gauge wire typically used is pushing it for 12V/8.5A/100W -- that's rated to 10A. I'd really like to see more safety margin with 14 gauge wire.
In short, 12V2x6 has very little safety margin. Treat it with respect if you care for your hardware.
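The arithmetic above is easy to sanity-check. A minimal sketch (pin count and rating taken from the comment, not from any official spec document):

```python
# Back-of-envelope check of the 12V2x6 numbers: 600W over 12V on 6 power
# pins, each pin nominally rated 8.5A (per the comment above).
WATTS = 600
VOLTS = 12
POWER_PINS = 6
PIN_RATING_A = 8.5

total_current = WATTS / VOLTS               # 50 A total
per_pin = total_current / POWER_PINS        # ~8.33 A per pin
margin = (PIN_RATING_A - per_pin) / PIN_RATING_A  # ~2% headroom

# If one pin makes poor contact, the remaining 5 pick up its share:
per_pin_one_bad = total_current / (POWER_PINS - 1)  # 10 A, over the rating

print(f"per pin: {per_pin:.2f} A, margin: {margin:.1%}")
print(f"with one bad pin: {per_pin_one_bad:.2f} A")
```

Roughly 2% headroom with all six pins working, and a single bad pin immediately puts the rest 18% over their rating.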
Great summary. Buildzoid over on YouTube came to a similar conclusion back during the 4xxx series issues[1], and looks like he's released a similar video today[2]. It's worth a watch as he gets well into the electrical side of things.
It's been interesting to realize that we've probably been dealing with poor connections on the older Molex connectors for years, but because of the ample margins, it was never an issue. Now, with the high power spec, the underlying issues with the connectors in general are a problem. While the use of sense pins sorta helps, I think the overall mechanism used to make an electrical connection - which hasn't changed much in 30+ years - is probably due for a complete rethink. That will no doubt make connectors more expensive, but much of the ATX spec and surrounding ecosystem was never designed for "expansion" cards pushing 600-800W.
> I think the overall mechanism used to make an electrical connection - which hasn't changed much in 30+ years - is probably due for a complete rethink.
There are tons of high-power connectors out there, and they look and work pretty much the same as the current ones (to the untrained eye). They are just more expensive.
Though at 40A+ you tend to see more "banana" type connectors, with a cylindrical piece that has slits cut in it to deform. Those can handle tons of current.
> There are tons of high-power connectors out there, and they look and work pretty much the same as the current ones (to the untrained eye). They are just more expensive.
That's fair, so maybe not a complete rethink then. But definitely a higher standard of quality. Right now, my experience with any of those Molex-type connectors (be it a 4-pin HDD connector, 8-pin EATX 12V, or PCIe somethingorother) is that they rely on the pin properly aligning with the holder on the other side, and if those aren't lined up, the pin can simply end up pushing the holder and its wire back instead of seating correctly. There's plenty of give and play in those cables, and it's hard to tell at a glance whether all of them have seated correctly or a holder has been pushed backwards in its socket. I can imagine a higher-quality connector with tighter tolerances and stiffer materials would lessen the likelihood of this happening, but no doubt with higher costs to PSUs and cards.
I suspect manufacturers are sensitive to price increases there, but I have to imagine tacking even a few dollars onto an already exorbitantly priced card that might otherwise melt is a good value? I guess we'll see.
Both male and female terminals are supposed to be retained in the plastic housing by little wings (locking tangs) that are very strong. The metal bits can wiggle in the plastic housing (feature not a bug -- something has to absorb the tolerances) but not retract, not without an extractor tool or an extreme amount of force sufficient to tear apart or fold the metal contact. Anyone who has tried to extract one of the terminals without the correct extractor tool can attest to just how much force this is. It's a lot, and the specs are also such that you should never get a metal pin tip meeting a metal edge if the plastic bits are engaged.
Of course, shitty out-of-spec molex clones abound. I have no doubt you saw what you saw, I'm coming to the defense of the specified design, which is ingenious and works extremely well at extremely low cost and loose tolerances when implemented correctly.
This [1] is also a good deep dive into the space covering the spec, limits, and materials details. For example:
> The specification for the connector and its terminals to support 450 to 600W is very precise. You are only within spec if you use glass fiber filled thermoplastic rated for 70°C temperatures and meets UL94V-0 flammability requirements. The terminals used can only be brass, never phosphor bronze, and the wire gauge must be 16g (except for the side band wires, of course).
And yet plenty of things around the house use far more than 800W and work fine. The secret is to use a more reasonable voltage.
30V or 36V or even 48V would leave a decent margin for touch safety and have dramatically lower current and even more dramatically lower resistive loss.
This is the most informative assessment in this thread.
You'd expect the capacity to be 125% of the expected load, as is common in other electrical systems.
Ratings for connectors and conductors come with a temperature spec as well, indicating the intended operating temperature at a given load. I'm sure, with this spec already near the limit of the components, that the operating temperatures near full load are not far from the limit either.
Couple that with materials that may not have even met that spec from the manufacturer, and this is what you get: cheaper ABS plastic in the connector housing instead of nylon, PVC insulation on the wire instead of silicone, and you just know the amount of metal in the pins is the bare minimum, too.
"3rd party connectors" is being waved around by armchair critics. The connectors on the receiving end of all of this aren't some cheap knock-offs; they are from a reputable manufacturer and probably exceed the baseline.
Let's not forget that the 90-series cards in each generation won't be top end forever. Soon they will just be used cards like all other technology. Someone building their first computer might get a good deal on eBay on a 5090 that's 5 or 6 generations old and cobble it together with some other old parts, maybe a weak PSU or an older 12VHPWR cable.
Except when the "expected tolerance" is unreasonable.
Even if the connectors and wires are to spec, the design leaves next to no margin for play. You need that margin to account for reality: Handling by casual end-users rather than trained professionals, the ambient temperature of the average room or office, dirt and grime that might get lodged and go unnoticed, wonky supply/draw of power, and more.
Running 8.3A through connections rated for 8.5A may be "expected tolerance", but it's also fucking stupid, in no uncertain terms.
1) The designed safety margin is unacceptably low. It should be set such that any cable that complies with the expected safety tolerance for carrying current is safe to use.
2) The late-model Nvidia cards in particular have no feedback system to detect unbalanced current across the 12V wires that make up the connector, and no circuitry to keep the current balanced even if they did. That is, they forgo any active control and depend on the physical properties of the conductors being perfectly balanced.
Overall, Nvidia failed to learn from the melting connector issues in the RTX 4000 series and doubled down by increasing the power draw while further cost-cutting the safety circuitry.
I'm curious; if there are any high-level electrical engineers reading this, please respond.
I wonder if that vertical (relative to the PCB) power connector will ensure that this sort of imbalance always occurs. While we like to pretend current is even in any given current plane, that's not what happens; the impedance of the wires and copper is not perfectly ideal. This is why these connectors have an equal number of grounds, so they have an ideal shortest path and a balanced return current path. So I'm curious whether it's just electrically impossible to have a vertical connector like that (one that shorts all the 12V pins together instead of current balancing them) and have it balance current across the pins. The pins closest to the board should in theory carry the greatest currents, as they are the shortest path electrically. Based on the pictures, that appears to be the case: the pins under the most stress are likely those with the lowest impedance.
Assuming my SWAG above is correct... I'm curious if this is affected by the per pin impedance on the PSU too. Where if certain folks are just unlucky get a situation where some pins in the connector have a significantly lower impedance than the rest.
If my second SWAG is plausible, my third and really bad SWAG is that removing the two ground pins nearest the PCB could actually "balance" the current better by forcing the current to use a slightly longer path for the power pins. But, my guess is this will just cause EMI issues. So please don't test this unless you're an EE and know what you're doing.
This is pure speculation on top of what Buildzoid and the posts above have said, and what I've learned from Robert Feranec's videos. I'm in no way an electrical engineer, just a humble hobbyist who loves to learn.
Paralleling wires is stable because the TCR of copper is positive. When one connection carries too much current compared to its peers, it will heat up. This will increase its resistance, causing it to accordingly carry less of the current. So the system is self-balancing.
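That self-balancing behavior can be illustrated with a toy iteration. This is a crude sketch, not a real thermal model: the resistances, total current, and the assumption that temperature rise is simply proportional to dissipated power are all made up for illustration.

```python
# Two copper wires in parallel with mismatched resistance. Current splits
# in proportion to conductance; the hotter wire's resistance rises
# (copper TCR ~ +0.39%/degC), which shifts current away from it.
TCR = 0.0039          # copper temperature coefficient, per degC
R20 = [0.010, 0.008]  # ohms at 20 degC: wire 2 is the lower-resistance path
I_TOTAL = 20.0        # amps shared between the two wires (illustrative)
K_THERMAL = 50.0      # assumed degC of rise per watt dissipated

temps = [20.0, 20.0]
for _ in range(200):  # iterate to a steady state
    r = [r20 * (1 + TCR * (t - 20)) for r20, t in zip(R20, temps)]
    g = [1 / ri for ri in r]
    currents = [I_TOTAL * gi / sum(g) for gi in g]
    temps = [20 + K_THERMAL * i * i * ri for i, ri in zip(currents, r)]

print(currents)  # wire 2 still carries more, but less than its cold-split share
```

The favored wire starts out carrying ~11.1A of the 20A; after heating, its share drops somewhat. The effect is real but gentle, which is why it can't rescue a connection that is badly out of balance to begin with.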
Do not remove ground wires. That is stupid. You'll just be raising the current in the remaining wires. EMI should not be a major concern as we are talking about DC power delivery here (also why I'm saying "resistance" instead of "impedance") and so the potential for trouble by changing the number of conductors making a connection is limited. Yes, anything could happen, but that's just the nature of EMC problems.
Yeah, I realized that was the worst way to go about testing that right after I went to bed last night. If (big stress on if) that was the issue, a ferrite bead would be a better way to test it. Based on what you're saying, my SWAGs were wildly off. I'd still like to see the sims of it, however, to see if they provide any illumination on the issue. What makes me think something weird is going on is that it's two out of six wires heating up to absurd degrees. Of the other four, two are carrying normal currents and the last two (based on Roman's video) are carrying practically nothing. Buildzoid makes the convincing argument that Nvidia engineers were clearly aware something like this could happen on the 3090, but then didn't carry that over to the 4090/5090.
> This is why these connectors have equal number of grounds, so they have an ideal shortest path and balanced return current path.
These connectors have an equal number of grounds and 12V wires because the same current flows on both sides, and the required current justifies at least 6 wires each.
PCIe 8-pin power is a bit weird because it's 3 12V and essentially 5 grounds; but that's because it's PCIe 6-pin plus a promise that the power supply makers know what they're doing... The extra 2 grounds signal that the PSU designers are aware of the higher current limit, even though the wiring specifications are the same.
Show me the Molex logo molded on that connector end and I'll believe you.
It's all off-brand slop from companies that learned marketing by emulating western brands. Kids on the internet lap up circuitous threads about one brand being better than the other based on volume.
I watched der8auer's analysis this morning, and you've seemingly hit the nail on the head. Even on his test bench it looks like only two of the wires are carrying all of the power (instead of all of them; I think 4 would be nominal?), using a thermal camera as a measuring tool. The melted specimen also has a melted wire.
Maybe 24V or 48V should be considered, and thicker (lower-gauge) wires -- yes.
As others have no doubt mentioned, power loss in a cable is P (watts) = I (amps) * dV (volts dropped along the wire).
Since dV = I * R, that gives P = I^2 * R -- that is, other things being equal, amps squared is the dominant factor in how much power is lost over a cable. In the low-voltage realm most insulators are effectively the same and there's very little change in resistance relative to the voltages involved, so it's close enough to ignore.
600W @ 12V? 50A ==> 1200 * R while at 48V ~12.5A ==> 156.25 * R
A 48V system would have only ~13% the resistive losses over the cables (more importantly, at the connections!); though offhand I've heard DC to DC converters are more efficient in the range of a 1/10th step-down. I'm unsure if ~1/25th would incur more losses there, nor how well common PC PCB processes handle 48V layers.
"""
In electrical power distribution, the US National Electrical Code (NEC), NFPA 70, article 725 (2005), defines low distribution system voltage (LDSV) as up to 49 V.
The NFPA standard 79 article 6.4.1.1[4] defines distribution protected extra-low voltage (PELV) as nominal voltage of 30 Vrms or 60 V DC ripple-free for dry locations, and 6 Vrms or 15 V DC in all other cases.
Standard NFPA 70E, Article 130, 2021 Edition,[5] omits energized electrical conductors and circuit parts operating at less than 50 V from its safety requirements of work involving electrical hazards when an electrically safe work condition cannot be established.
UL standard 508A, article 43 (table 43.1) defines 0 to 20 V peak / 5 A or 20.1 to 42.4 V peak / 100 VA as low-voltage limited energy (LVLE) circuits.
"""
The UK is similar, and the English Wikipedia article doesn't cite any other country's codes, though the International standard generally talks at the power grid distribution level.
> A 48V system would have only ~13% the resistive losses over the cables (more importantly, at the connections!)
It's one-sixteenth (6.25%) actually. You correctly note that resistive losses scale with the square of the current (and current goes with reciprocal voltage), so at 4 times the voltage, you have 1/4th the current and (1/4)^2 = 1/16th the resistive losses.
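The scaling is easy to verify numerically (R here is a placeholder; only the ratio matters):

```python
# At fixed delivered power, I = P / V, and resistive loss is I^2 * R,
# so loss scales with (1/V)^2.
R = 1.0    # placeholder cable resistance, ohms
P = 600.0  # delivered power, watts

loss_12v = (P / 12) ** 2 * R   # 50 A   -> 2500 * R
loss_48v = (P / 48) ** 2 * R   # 12.5 A -> 156.25 * R

print(loss_48v / loss_12v)  # 0.0625, i.e. one-sixteenth
```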
I've been beating the 48v drum for years. Any inefficiency in the 48-to-1 conversion should be mostly offset by higher efficiency in the 240-or-120-to-48 conversion, I suspect it's a wash.
Every PoE device handles 48 without issue on normal PCB processes, so I don't expect that to be a big deal either. They _also_ have a big gap for galvanic isolation but that wouldn't be necessary here.
From what I can gather, one challenge with 24V and higher is that switched-mode converters, such as the buck converters used in the power stage, get a lot more inefficient when operating at high ratios.
You can see this effect in figure 6 in this[1] application note, where it's >90% efficient at ratios down to 10:2.5, but then drops to ~78% at a ratio of 10:1.
So if one goes for higher voltage perhaps 48V would be ideal, and then just accept the GPU needs a two-stage power conversion, one from 48V to 12V and the other as today.
The upside is that this would more easily allow for different ratios than today, for example 48V to 8V, then 8V to 1.2V, so that each stage runs at roughly the same moderate ratio.
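A rough way to see the two-stage argument in numbers. The per-stage efficiencies below are illustrative guesses in the spirit of the application note's figure (a deep-ratio buck around the high 70s, moderate ratios in the low-to-mid 90s), not measured values:

```python
# Splitting one deep step-down into two moderate-ratio stages: overall
# efficiency is the product of the stage efficiencies.
def overall_efficiency(stages):
    eff = 1.0
    for _label, stage_eff in stages:
        eff *= stage_eff
    return eff

single    = [("48V -> 1.2V (40:1)", 0.78)]   # one deep-ratio buck
two_stage = [("48V -> 8V (6:1)",    0.95),   # two moderate ratios
             ("8V -> 1.2V (6.7:1)", 0.93)]

print(overall_efficiency(single))     # 0.78
print(overall_efficiency(two_stage))  # ~0.88
```

Even after multiplying two conversion losses together, the two moderate stages come out ahead of one extreme-ratio stage under these assumed numbers.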
6 or 12, depending on how you count. There are 6 12V supply wires, and 6 GND return wires. All of them should be carrying roughly the same current - just with the GND wires in the opposite direction from the 12V ones.
> Really would like to see more safety margin with 14 gauge wire.
The wire itself really isn't the issue; the NEC in the US is notoriously cautious, and 15A continuous is allowed on 14AWG conductors. Poor connectors that do not ensure good physical contact are the real problem here, and I really fail to understand the horrid design of the 12VHPWR connector. We went decades with traditional PCIe 6-pin and 8-pin power connectors with relatively few issues, and what does 12VHPWR do over them? Save a little bulk?
This can't be Micro-Fit 3.0, those are only sized to accept up to 18AWG. At least, with hand crimp tooling, and that's dicey enough that I'd be amazed if Molex allowed anything larger any other way. The hand crimper for 18AWG is separate from the other tools in the series, very expensive, and a little bit quirky. Even 18AWG is pushing it with these terminals.
I would bet a lot of money that more than one engineer at Nvidia flagged this as a potential issue. If you were going to run this close to the safety margin, I would at minimum add current sensing on each pin.
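For illustration, the kind of check per-pin sensing would enable is trivial. This assumes a shunt or Hall sensor per 12V pin (which these cards don't have); the thresholds and readings are made up:

```python
# Sketch of per-pin current monitoring: flag any pin over its rating and
# any large spread across pins, before imbalance becomes heat.
PIN_LIMIT_A = 8.5    # typical Micro-Fit 3.0 per-pin rating
MAX_SPREAD_A = 2.0   # assumed allowable max-min spread across pins, amps

def check_pins(currents):
    """Return a list of problems found in per-pin current readings (amps)."""
    issues = [f"pin {i}: {a:.1f} A exceeds {PIN_LIMIT_A} A"
              for i, a in enumerate(currents) if a > PIN_LIMIT_A]
    if max(currents) - min(currents) > MAX_SPREAD_A:
        issues.append("current imbalance across pins")
    return issues

print(check_pins([8.3, 8.3, 8.4, 8.2, 8.3, 8.3]))    # healthy: []
print(check_pins([10.0, 10.1, 9.9, 9.8, 0.1, 0.1]))  # overloads + imbalance
```

On detection, the card could throttle its power limit or refuse to ramp up, rather than letting two wires silently carry everything.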
What is the reason the spec keeps specifying next to no headroom? Clearly that was the fundamental problem with 12VHPWR, and it's being repeated with 12V2x6.
Any engineer worth their salt knows that you should leave plenty of headroom in your designs; you are not supposed to stress your components to (almost) their maximum specifications under nominal use.
I found it hilarious when a friend went to use a Tesla Supercharger with his F-150 Lightning. As the cable is only long enough to reach the charge port on the corner of a Tesla, he had to block 2 parking spaces and almost 3 chargers to use it. Oops... I hope all the money "saved" on copper was worth it.