It is an excellent idea, and so are "waste heat networks". You could then build a heat sink at scale that also serves as a reservoir for reuse. It could be as simple as installing another water loop that services each location, just like our existing water system.
Of course the details need to be worked out, but if a business district had a WHN, it would make it easier for mom-and-pop datacenters to build in urban environments.
It wouldn't be that much different than the steam loops that lots of existing cities have in their downtown core.
On the contrary, it can be extremely difficult and expensive to do with waste heat from datacenters. Your average energy-conserving cooling equipment has two modes:
The first mode, for cold outside weather (e.g. in winter), is free cooling: you use convection or fans to bring in cool outside air and push out warm inside air (there are also variants of this, like the "Kyoto wheel", but those are unsuitable for heat reuse). At best you could use the warm air output (~25°C) to directly heat neighboring buildings, but the air ducts you would need for that are massive. Comfort in buildings heated this way is low due to the high air velocity and associated noise. Air ducts are also a fire hazard and high-maintenance to keep dust and vermin out.
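To see why the ducts get massive, here is a back-of-envelope sketch. The 1 MW load, the 10 K usable temperature drop, and the 5 m/s duct velocity are my own illustrative assumptions, not figures from the thread; the air properties are standard textbook values.

```python
# Back-of-envelope: air volume needed to carry datacenter waste heat
# to a neighboring building. Q = m_dot * cp * delta_T.
heat_load_w = 1_000_000   # assumed: 1 MW of waste heat
delta_t_k = 10.0          # assumed: usable temperature drop at the building
cp_air = 1005.0           # J/(kg*K), specific heat of air
rho_air = 1.2             # kg/m^3, density of air around 25 C

mass_flow = heat_load_w / (cp_air * delta_t_k)  # kg/s
volume_flow = mass_flow / rho_air               # m^3/s
duct_area = volume_flow / 5.0                   # m^2 at an assumed 5 m/s

print(f"{mass_flow:.1f} kg/s of air, {volume_flow:.1f} m^3/s, "
      f"{duct_area:.1f} m^2 of duct cross-section")
```

That works out to roughly 100 kg/s of air and on the order of 17 m² of duct cross-section per megawatt, which is why low-grade warm air is so awkward to export compared with water.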
The other mode is water cooling (direct or indirect), where you cool your servers with water directly, or cool the air through water radiators. The warm water is then cooled down with outside air, outside air plus evaporation (both possible only when it is not too warm outside), or compressive cooling (i.e. a heat pump, the usual big MW-scale machines in the cellar). In those cases, district heating is only possible if you can somehow reach a sufficient water temperature. For example, direct water cooling of servers uses at most ~30°C intake water and outputs at most ~50°C. District heating usually runs at 70°C, so you would need a running heat pump to make up the 20 K difference. When the servers are indirectly air-cooled, the gap is even larger.

So you will always need those big MW-scale heat pumps running just to make use of your waste heat, at great expense and for the uncertain benefit of maybe selling the heat to your neighbors. This is deadly for mom-and-pop datacenters because of the uncertainty (maybe you can sell your heat, maybe it will be too expensive) and the huge investment: your cooling equipment will be far larger (more and bigger heat pumps), more expensive, and redundant (because in summer you will still need equipment to give off the heat into the outside air).
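The cost of that 20 K lift can be sketched with the ideal (Carnot) heating COP. The 50°C and 70°C figures come from the comment above; the 45% fraction-of-Carnot factor is a typical rule-of-thumb assumption for real heat pumps, not a measured value.

```python
# Sketch: ideal heating COP for lifting server return water (50 C)
# to district-heating supply temperature (70 C).
# Carnot heating COP = T_sink / (T_sink - T_source), in kelvin.
t_source_c = 50.0   # water leaving direct-cooled servers
t_sink_c = 70.0     # district-heating supply temperature

t_sink_k = t_sink_c + 273.15
lift_k = t_sink_c - t_source_c

cop_carnot = t_sink_k / lift_k   # ideal upper bound
cop_real = 0.45 * cop_carnot     # assumed ~45% of Carnot in practice

print(f"Carnot COP: {cop_carnot:.1f}, realistic COP: ~{cop_real:.1f}")
```

So even in the favorable direct-cooling case you might get a realistic COP around 8: roughly 1 kWh of electricity bought per 8 kWh of heat sold, on machines that must be sized, bought, and kept running regardless of whether a heat buyer materializes.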
All the sufficiently large customers I know of are looking to move abroad, for this reason and because of the astronomical cost of electricity in Germany.
I think most DC equipment should have built-in coolant loops, standardized to the point where you can order all the equipment and just plug it in.
Spec DC components to run at much higher temperatures.
The issue is that DC operators get to trade a natural resource (water, power) for lower up-front build costs. Owner-operators do a much better job, but Google still extracts billions of gallons of water from municipal supplies, often even groundwater, which I think should be a crime.
I'd also like to see the cost of cooling be passed on to the cloud customer. Mixing it into the hourly price causes a tragedy of the commons.
But the cost of cooling is passed on to the cloud customer. Different zones/DCs/regions have different hourly prices, which are a function of, among other things, the local cost of electricity and cooling.
It is very resource-intensive to build these low-grade waste heat networks, and you could achieve much more for the climate by NOT building them and investing the effort elsewhere, for example in building out renewable energy sources.
The market is very good at figuring these things out and you can push it in the right direction by putting a price on carbon.