Microsoft reveals its server designs and releases open source code (arstechnica.com)
141 points by DiabloD3 on March 11, 2014 | hide | past | favorite | 51 comments


>> "We also expect this server design to contribute to our environmental sustainability efforts by reducing network cabling by 1,100 miles and metal by 10,000 tons across our base of 1 million servers."

It's cool how you can save a lot on small things when stuff gets so big.


10,000 tons across 1 million servers is 10 kg per server. That does not seem like a "small thing" to me.


It depends on the density: 10 kg of copper versus 10 kg of aluminum would be quite different in bulk.


This is great news.

In the same way that openly shared and critiqued communication protocol specs are a boon for incredibly performant, efficient and secure services, so too can cracking the black box on data center hardware be for building even more powerful server farms and the like.

It's cool to see Microsoft join Facebook in disrupting a closely guarded branch of system design. The designs are open source -- any of the threatened companies can take, learn from, and extend them.

If opening up designs like this is a threat to wages or jobs (I think the argument might be "why pay a system designer if I can grab these specs") it's the same threat presented by any technology advance. Innovate or die, as they say.

Though if a company is going to pay a team to design a custom box and then open source that design, the least they could do is prominently credit the team of engineers.


I feel this trend could really hurt Dell, HP, etc. Isn't the value in their server business the server designs? There is no margin on manufacturing the hardware.


More efficient hardware doesn't do away with maintenance and support. Those giants, as I understand it, make a lot of money from support contracts or from running servers/datacenters directly.


At scale, self-support and self-warranty is also cheaper than a support contract from HP/Dell/Lenovo.


It's no different from Open Compute et al.

If hardware manufacturers see this as a challenge, there's an easy solution: start building it!

The huge gap in this, Open Compute, etc. is that for most of us, it's nowhere near cost effective to actually have these designs fabricated. That leaves a gaping hole for Supermicro, Dell, etc. to fill.


I wonder what Microsoft's position means for their traditional OEM partners. By slimming down their margins, this may end up as an existential threat to x86 server makers. I'm curious whether they'll take it lightly.


Some previous discussion from 1-2 months ago:

https://news.ycombinator.com/item?id=7134764


But still no Age of Empires source code Q_Q


With all these companies reporting, does anyone have a cheat sheet ranking companies by the size of the clusters they own? I suspect the NSA is tops but we'll never know (just like claims that Putin is the richest man in the world :-)


*eyeroll* You can lose the NSA's largest datacenter in a corner of any Google, Facebook, Microsoft, Amazon, or Apple datacenter. Even their OMG-gigantzor datacenter in Utah includes a mere 100k square feet of computer-room floor area. Given government overhead and inefficiency, a 100k sq-ft NSA datacenter is probably the equivalent of a 10k sq-ft commercial datacenter.

By comparison, one of Apple's datacenter buildings is over 300k sq-ft, and Apple is at best a minor also-ran in the datacenter business.

The NSA's computing facilities are insignificant compared to commercial facilities.


That's because the commercial facilities are doing all the heavy lifting for the NSA. Facebook does all the analysis, NSA gets the finished reports.


You realize NSA has a lot more signals interception programs than just PRISM, right?


Nice to see them joining Open Compute. I find the whole Open Compute thing pretty interesting, having gotten the 'Google story': the question "Why should I pay for this sheet metal that only gets in the way of me fixing a machine?" started an avalanche of other questions, all the way to "Why do we put things in RELAY RACKS anyway?"

The current tension between 'colo facilities' and 'EC2/CE/Azure' is almost palpable at times. I look forward to the first "OpenCompute Ready" hosting facility that just needs you to plug in your server or storage boards.


Seems like an obvious thing to do (either bare metal rentals, or space for cages) in Bonneville (where you can be dirt cheap in power and pretty cheap on bw) and a few other locations.


Another open compute project, another day where I cry I don't manage enough servers to get involved!


Can someone elaborate on how much extra value this adds to the project?

As Facebook and others have already joined, I'm curious how much added value a big company like Microsoft brings in.


The MS servers are less weird than the Facebook servers, so they may appeal to people who are just starting to get into Open Compute.


Define "less weird"


They use standard racks and standard power.


Reduction in energy cost as well as hardware costs.

Saving, say, 1 watt of power per blade, scaled across a thousand servers (with 100% uptime), is something like 8.76 gigawatt-hours of power per year (not counting cooling). At average California power rates (15.2 cents per kilowatt-hour), you're saving 1.3 million dollars per year in power costs.


Did you mean 1000 blades per server? Because otherwise 1 × 24 × 365 × 1000 = 8,760,000 Wh, meaning 8.76 MWh, not GWh, and the savings are about $1,300.


Unless they developed some significantly different hardware with significantly different capabilities, I'd say they don't add much.

What Microsoft may achieve is to build machines that are better suited to running Windows, as most other large-scale operations don't deploy Windows servers to production. Microsoft, being the only company with 1M Windows servers, is uniquely positioned to do it.


This and the Android Office 365 SDK. Glad to see Microsoft making a move towards the open web.


What is the motivation behind doing this? To my wary eye, I think this is another way to depress wages.


If Microsoft open sources anything, their motivation is to depress wages and destroy other people's jobs. Obviously.


Cheaper servers = companies buy more servers = more Windows Server / MSSQL / sharepoint licenses.


Joel Spolsky wrote a good article on this approach:

"Smart companies try to commoditize their products' complements."

http://www.joelonsoftware.com/articles/StrategyLetterV.html

[NB Yes it's from 2002 but the basic rules of the game stay pretty constant]


Yes. Software company works to commoditize hardware, news at 11. (This has been MS's strategy since MS DOS days, as outlined in Spolsky's piece.)


Actually, it can be traced back to the Microsoft SoftCard. They made compilers and interpreters for CP/M, but they'd have had to port them to run on Apple IIs, which would have been an enormous undertaking. Instead, they sold a Z-80 coprocessor for the Apple II that allowed Apple IIs to run CP/M and, therefore, to run Microsoft software.

They tried that too with MSX, which made the front page a couple of days back.

https://news.ycombinator.com/item?id=7367544


How would this depress wages?


Commoditizing servers gets rid of the extra profits Dell/HP/IBM used to make off server sales. They're basically just killing off the server industry for companies at the scale of Microsoft/Facebook.


It's already dead. I'm not sure it was ever alive. Companies like Google started building their own servers long before present scales were ever reached.


By making it appear easy, while in reality adding more complexity.


It's most likely due to the new CEO.


This news post is from January; Nadella was appointed in February. He had been leading that division for years, so I guess you could say that, but it's not due to him becoming CEO.


I wonder if it can handle paths with more than 254 characters.


...but why would you need it?

Once your path gets longer, aren't you essentially just stuffing your file name with metadata which should belong in the file in the first place?


Wow, surprised at -4 downvotes. Handling long paths is necessary when working with Node.js, for example.



So can NTFS. In fact, you've been able to create paths of up to roughly 32k characters (with up to 255 characters per component) since Windows NT 3.1. This is even available from Win32, and has been for a very long time: simply prefix your path with \\?\ and you're good to go.

The issue is that Explorer--and by extension, virtually every Open/Save dialog in every application--cannot handle paths longer than about 254 UTF-16 code units. I'm fairly sure at this point that Microsoft does this purely because they're not sure application programs can handle longer paths (and they're likely correct), so they're trying to prevent users from creating files that they can't open, but I find the entire situation absolutely infuriating. (In fact, as a Windows dev, it's really just that and the inability to delete open files that drive me bonkers on a regular basis at this point. Most of the rest I either like or have good workarounds for.)


C programmers (including me when I'm lazy) will often simply allocate MAX_PATH characters for a file name buffer and not think about it.


Exactly; that's the problem. The usual correct approach on Windows, calling the function with `NULL` to get the length, then allocating the buffer and passing it back in, is wicked annoying with path names, so most developers simply don't bother.

Interestingly, while that problem doesn't exist in .NET, and so you'd think that .NET would finally support long file names by default, Microsoft actually goes entirely the opposite direction: the BCL strips out any leading `\\?\` in paths to force you to stay at MAX_PATH or below. The more things change...


You know there's a way around having to do those things every time you use CreateFile, right? It's a neat thing they invented many moons ago, starts with an F and ends with "nction".


Which leads to an ever-growing file called "Utilities.cpp" that gets copied into every C++ project you ever write, and god forbid there's ever a bug fix, because it now needs to be applied to 20 separate projects in 20 separate version control systems.


There are ways around having to apply a patch 20 times, too. Libraries and git submodules to name a few.

Secondly, if you use 20 version control systems, you're doing it horribly, horribly wrong.


Microsoft and open compute, what a circus. Microsoft and "open" don't go together.

If you want to see how open Microsoft likes it, check out M$ Secure Boot.


Had to double check and make sure I wasn't on Slashdot in 1998.


You can just turn it off. It's for boot virus protection.

Not that it makes any difference anymore. I have Ubuntu on my laptop and didn't even have to disable Secure Boot to install it.



