Never got around to writing any public docs. It's essentially a bunch of GPUs on custom aluminum extrusion frames sitting in a server rack, connected to a ROMED8-2T motherboard through PCIe splitters.
Power limited to 240W; negligible performance loss while halving energy usage. Uses three 20A circuits.
Performance can range anywhere from 2x 4090 = 1x A100 to 4x 4090 = 1x A100, depending on the model, etc.
It's great value for the money, and very easy to resell as well.
I meant each card is limited to 240W, instead of the usual 450W. Also, it's more like four circuits after all, because the main CPU/motherboard/two of the GPUs are on a 15A circuit too.
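If anyone wants to replicate the limit, it's just `sudo nvidia-smi -pl 240` per card. Here's a rough sketch that does the same thing across all cards via NVML (assumes the nvidia-ml-py package is installed and it's run as root; 240W is just the value I use, not a recommendation):

```python
# Apply a 240 W power cap to every visible GPU via NVML.
# Equivalent to running `sudo nvidia-smi -i <n> -pl 240` for each card.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
    nvmlDeviceGetHandleByIndex, nvmlDeviceGetName,
    nvmlDeviceSetPowerManagementLimit,
)

LIMIT_MW = 240 * 1000  # NVML takes milliwatts

nvmlInit()
try:
    for i in range(nvmlDeviceGetCount()):
        handle = nvmlDeviceGetHandleByIndex(i)
        nvmlDeviceSetPowerManagementLimit(handle, LIMIT_MW)
        print(f"GPU {i} ({nvmlDeviceGetName(handle)}): capped at 240 W")
finally:
    nvmlShutdown()
```

Note the cap resets on reboot, so you'd want to run something like this from a startup service.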
Ah! OK, thank you, now I get it. That's a very nice rig you have there. So at a guess, you didn't care as much about peak compute capacity as long as whatever you're doing fits in GPU memory, and this is your way of collecting that much memory in a single machine while still keeping reasonable interconnect speeds between the GPUs?
Yeah, it's really just trying to get as much compute as possible, as cheaply as possible, interconnected in a reasonably fast way with low latency. Slow networking would be a bottleneck, and expensive high-end networking would defeat the purpose of staying cheap.
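For anyone curious what "fits in GPU memory" looks like in practice: the usual approach is to shard one model's weights across all the cards so the pooled VRAM acts like one big pool, e.g. with Hugging Face transformers + accelerate. A minimal sketch; the model name is just a placeholder, not necessarily what this rig runs:

```python
# Shard a single large model across all visible GPUs with device_map="auto";
# accelerate assigns layers to cards so the weights fit in combined VRAM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-70b-hf"  # placeholder example
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",   # spread layers across every GPU
    torch_dtype="auto",
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```

Inter-GPU traffic then flows over those PCIe links, which is why the splitter bandwidth and latency matter.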
You'd be surprised at how cheap high-end networking that outperforms PCIe 4.0 x4 can be: 100Gb Omni-Path NICs are going for $20 on eBay! And those will saturate PCIe 3.0 x16.
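Rough per-direction numbers behind that comparison (theoretical rates, ignoring protocol overhead):

```python
# Back-of-envelope bandwidth comparison (per direction, theoretical).
pcie3_lane = 0.985   # GB/s per PCIe 3.0 lane (8 GT/s, 128b/130b encoding)
pcie4_lane = 1.969   # GB/s per PCIe 4.0 lane (16 GT/s)

pcie4_x4  = 4 * pcie4_lane     # ~7.9 GB/s  (~63 Gb/s)
omni_path = 100 / 8            # 12.5 GB/s  (100 Gb/s link)
pcie3_x16 = 16 * pcie3_lane    # ~15.8 GB/s (~126 Gb/s)

print(f"PCIe 4.0 x4 : {pcie4_x4:.1f} GB/s")
print(f"Omni-Path   : {omni_path:.1f} GB/s")
print(f"PCIe 3.0 x16: {pcie3_x16:.1f} GB/s")
```

So a 100Gb link comfortably beats PCIe 4.0 x4, and the Omni-Path HFI itself is a PCIe 3.0 x16 card, so it has the host bandwidth to feed it.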
Though of course, with multiple boards/RAM/CPUs it gets complicated again.
Note that I don't know those eBay sellers at all; they're just some of the cheaper results that showed up when searching. There seem to be plenty of other results too. :)