
There are already systems that do this in hardware. Any system that has memory mirroring RAS features can do this, notably IBM zEnterprise hardware, you know, the company that this video promoter claims to be one-upping.

I don't think the memory mirroring features available today allow you to race two DRAM accesses and use whichever result returns first.

The memory controller sends the read to the DIMM that is not refreshing. It is invisible to software, except for the side-effect of having better performance.
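For intuition, staggered refresh across mirrored copies can be sketched as a toy scheduler. All timings and the two-copy layout here are assumptions for illustration, not IBM's actual design:

```python
# Toy model: with mirrored DIMMs the controller staggers refresh so that
# at least one copy is always readable. Timings are made up (roughly
# tREFI/tRFC-shaped), not taken from any real part.
REFRESH_PERIOD_US = 7.8   # interval between refreshes, assumed
REFRESH_BUSY_US = 0.35    # stall while a refresh is in progress, assumed

def refreshing(dimm_phase_us, now_us):
    """True if the copy with this refresh phase is mid-refresh at now_us."""
    t = (now_us - dimm_phase_us) % REFRESH_PERIOD_US
    return t < REFRESH_BUSY_US

def route_read(now_us, phases=(0.0, REFRESH_PERIOD_US / 2)):
    """Return the index of a mirror copy that is not mid-refresh.
    With phases offset by half a period, one copy is always available."""
    for i, phase in enumerate(phases):
        if not refreshing(phase, now_us):
            return i
    return 0  # unreachable with properly staggered phases
```

The point of the model is that the routing decision is purely a function of time and the known refresh schedule, which is why it can live in the memory controller and stay invisible to software.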

Mirroring is more of a reliability feature though, no? From my understanding it’s like RAID where you keep multiple copies plus parity so uncorrectable errors aren’t catastrophic. Makes sense for mainframes which need to survive hardware failures.

Refresh avoidance is a tangential thing the memory controller happens to be able to do in a scheme like that, but you’d really have to be looking at it in a vacuum to bill it as a benefit.

Like I said, it’s all about cache. You’re not going to DRAM if you actually care about performance fluctuations at the scale of refresh stalls.


Clearly, hitting a cache would be the better outcome. The technique suggested here could only apply to unavoidably cold reads, some kind of table that's massive and randomly accessed. Assume it exists, for whatever reason. To answer your question, refresh avoidance is an advertised benefit of hardware mirroring. Current IBM techno-advertising that you can Google yourself says this:

"IBM z17 implements an enhanced redundant array of independent memory (RAIM) design with the following features: ... Staggered memory refresh: Uses RAIM to mask memory refresh latency."


I can google, thanks. My point is that nobody is buying mainframes with redundant memory to avoid refresh stalls. It’s a mostly irrelevant freebie on hardware you bought for fault tolerance.

Do you have evidence that this is a fact? Have you looked at the computing requirements documents for, for example, stock exchanges? I have it on good authority that stock exchanges ran on mainframes. They are essentially the counterparty (in a computing sense, not a financial sense) in each placed order. If someone is willing to run a fiberoptic cable from Chicago to New York or New Jersey to exploit reduced propagation delay, admittedly much larger than a refresh stall, wouldn't you think that they, or someone else, would also be interested in predicting computing stalls? An exchange would face at least a significant reputational risk if it could be exploited that way.

It is not only not practical, it is a completely useless technique. I got downvoted to negative infinity for mentioning this, but I guess I am the only person who actually read the benchmark. The reason the technique "works" in the benchmark is that all the threads run free and just record their timestamps. The winner is decided post hoc. This behavior is utterly pointless for real systems. In a real system you need to decide the winner online, which means the winner needs to signal somehow that it has won, and suppress the side effects of the losers, a multi-core coordination problem that wipes out most of the benefit of the tail improvement but, more importantly, also massively worsens the median latency.
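The online-winner problem described above can be sketched with threads. This is a hypothetical toy, not the benchmark's actual code: the first replica to finish has to grab shared state to suppress the losers, and that coordination is itself a cost sitting on the critical path.

```python
import threading, time, random

def racing_read(workers=2):
    """Race identical reads; the first to finish publishes its result.
    Deciding the winner *online* forces every replica through shared
    synchronization (the lock/event below), which is exactly the overhead
    that a post-hoc, timestamps-only benchmark never pays."""
    result = []
    lock = threading.Lock()
    done = threading.Event()

    def worker(i):
        time.sleep(random.uniform(0.001, 0.005))  # simulated DRAM access
        with lock:                                # coordination cost
            if not done.is_set():
                result.append(i)                  # winner publishes
                done.set()                        # losers are suppressed

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result[0]  # index of the winning replica
```

In a real system the losers would also have side effects (cache pollution, bus traffic, speculative writes) that must be cancelled, which is harder still than this sketch suggests.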

Man. You really don't get it do you.

You got downvoted for being an asshole, and if you continue to be an asshole on HN we are going to ban you. I suppose you don't believe this because we haven't done it yet even after countless warnings:

https://news.ycombinator.com/item?id=43850950 (April 2025)

https://news.ycombinator.com/item?id=43847946 (April 2025)

https://news.ycombinator.com/item?id=42096833 (Nov 2024)

https://news.ycombinator.com/item?id=37275963 (Aug 2023)

https://news.ycombinator.com/item?id=35746140 (April 2023)

https://news.ycombinator.com/item?id=34537078 (Jan 2023)

https://news.ycombinator.com/item?id=33914274 (Dec 2022)

https://news.ycombinator.com/item?id=33311881 (Oct 2022)

https://news.ycombinator.com/item?id=30890360 (April 2022)

https://news.ycombinator.com/item?id=26628758 (March 2021)

https://news.ycombinator.com/item?id=26307811 (March 2021)

https://news.ycombinator.com/item?id=25561372 (Dec 2020)

https://news.ycombinator.com/item?id=24724281 (Oct 2020)

https://news.ycombinator.com/item?id=24458954 (Sept 2020)

https://news.ycombinator.com/item?id=24380545 (Sept 2020)

https://news.ycombinator.com/item?id=23170477 (May 2020)

The reason we haven't banned you yet is because you obviously know a lot of things that are of interest to the community. That's good. But the damage you cause here by routinely poisoning the threads exceeds the goodness that you add by sharing information. This is not going to last, so if you want not to be banned on HN, please fix it.

https://news.ycombinator.com/newsguidelines.html


Meh, oversensitivity. The benefits of scholarship should outweigh any potential, if any, hurt feelings of this group.

If someone was this sensitive, that someone probably wasn't going to contribute anyway.

Sounds to me like virtue signaling.


Closed loop heat exchange costs more electricity. It's not a free lunch that data center designers are overlooking.

That is of course true, but it is at least not a totally unreasonable practice, unlike using fresh water straight off the grid as a cooling source.

I don't know if that's really true. Given realistic life cycles of equipment (~10 years, not 3 as commonly believed) the operating power is going to be 75-80% of the TCO, or more.

I don't see how that number could possibly be realistic.

An H100 cost $30k when new, and uses 500W of power.

500W for a year is about 4500kWh, which at $0.10/kWh is $450/year if run at full utilization (unrealistic).

TCO of an AI data center should be entirely dominated by capex depreciation.


In fairness, your calculation looks at the most expensive element of the DC but ignores all of the associated parts required to utilize the H100: CPU, memory, cooling, etc. Not to say that that flips the calculation (I don't have the answer), but it does leave a lot of power out.

Let's be generous and pretend the rest of the hardware is free but double the energy budget of the H100 to account for all of it along with cooling. You're still at only $1k/yr; $10k over 10 years, or 25% of the TCO (ignoring all other costs).
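Sketching the arithmetic from this exchange (all inputs are the assumed figures from the thread, not measured data):

```python
# Back-of-the-envelope check of the thread's numbers.
capex = 30_000                  # H100 purchase price, USD (assumed)
power_kw = 0.5                  # 500 W (assumed; ignores TDP variation)
hours_per_year = 8760
price_per_kwh = 0.10            # USD, assumed
years = 10                      # lifecycle from the earlier comment

kwh_per_year = power_kw * hours_per_year            # 4380 kWh
energy_cost_year = kwh_per_year * price_per_kwh     # ~$438/yr
# Double the energy budget to cover host CPU, memory, cooling, as above:
total_energy = 2 * energy_cost_year * years         # ~$8760 over 10 years
energy_share = total_energy / (total_energy + capex)  # ~23% of this TCO
```

Even with the doubled energy budget, capex dominates, which is the point being made; the earlier "75-80% opex" claim would require much cheaper hardware or much more expensive power than these assumptions.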

As has been repeatedly demonstrated[1], it is the presence of new, large consumers that drives down the cost of bulk power by amortizing the infrastructure investments.

Maine voters are, of course, notorious bozos in this field, having voted in a plebiscite in 2021 to cancel the link to Quebec Hydro, which was already substantially completed.

1: For example LBNL's latest banger: Factors influencing recent trends in retail electricity prices in the United States, https://www.sciencedirect.com/science/article/pii/S104061902...


This is so ignorant it hurts. The same exact proposition was voted down in New Hampshire years earlier, because the transmission line goes straight through natural forests, to Massachusetts, and has little to do with the state other than chopping down a bunch of trees. Neither Maine nor New Hampshire have an extra $1 billion to waste on enhancing the grid mainly for the benefit of southern New England states.

Neither Maine nor New Hampshire voters are "bozos" for voting it down. The whole ordeal even prompted Maine voters to establish a new law to stop foreign investors from influencing local referendums because Hydro Quebec spent so much money trying to sway the vote.


"Neither Maine nor New Hampshire voters are "bozos" for voting it down. "

I mean yes, that is how the Tragedy of the Commons works. Everyone individually makes the optimal decision for themselves but in effect you've basically hamstrung green sources of energy around the country by being very smart for your own state.

The question is, should you be allowed to do this?


> in effect you've basically hamstrung green sources of energy around the country by being very smart for your own state.

> The question is, should you be allowed to do this?

"...you've basically hamstrung green sources of energy"?

Well, after we stop growing corn to feed exclusively to cars and start using solar panels deployed on that land to harvest electricity for cars and houses and everything else that runs on electricity [0], if we're still short on power we can have the discussion you're itching to have.

[0] The immediately relevant discussion starts here <https://www.youtube.com/watch?v=KtQ9nt2ZeGM&t=1930s> and runs through to about 38:29, but the entire video is very, very well worth watching. If you intend to watch more of the video after ~38:29, I very strongly recommend that you start from the beginning.


Maybe Massachusetts should have offered Maine some incentive for running the power line through their territory. States make agreements like that all the time.

The line serves both states. Maine and Massachusetts are both in ISONE territory.

Do you have any links to support this? Because the commonality of all arguments _against_ has been that they make water and power crazy expensive for everyone that has to live close to the newly opened datacenters, while the DC operator enjoys subsidized land use tax, water and power.

If DCs can be harmful because of subsidized power, wouldn't the natural reaction be to stop subsidizing their power, rather than banning them?

"already substantially completed" isn't accurate. $450m of the eventual $1.65b cost had been spent at that point - so less than half.

I'd call that substantial

Indeed, considering that much of the cost in the end consists of carrying costs, litigation, and year-of-expenditure overruns that were caused by the delay.

Why on earth did they do that? Linking to a power station you didn't have to build seems like a no brainer. Was the deal that bad?

EFF has basically only succeeded in defending Section 230, which makes me wonder if the people who talk in this article and the people elsewhere on HN denouncing Section 230 know about each other.

There's been a lot of misinformation around section 230 in the last several years. This might be helpful, either as something to give out or to receive, depending.

https://www.techdirt.com/2020/06/23/hello-youve-been-referre...

Granted, it's from 2020, so there may be updated versions by now.


The immutability of extents is dictated by their SMR hardware, I believe.

I don't know the full picture behind their decision-making but immutability is much easier to reason about in a distributed system, in general.

That's true. Every system has some quantum of storage that must be handled as a unit, whether that is a logical block that can only be discarded entirely or whatever. But I think the relatively gigantic immutable extents discussed here are somewhat unusual.

Author here. With SMR, you do have large zones that are essentially immutable. However, in this case our extents and volumes are immutable because we do volume-level striping for erasure coding. This means that if any extent changes, the parities have to be rewritten as well. Others do block-level striping, so they can just move data around within a disk. There are lots of trade-offs with both approaches. Also, keeping volumes/extents immutable makes reasoning through correctness much simpler.
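A toy XOR parity example of why volume-level striping makes mutation expensive. This assumes a simple 3+1 XOR scheme for illustration, not the actual erasure code in the system being discussed:

```python
from functools import reduce

def parity(extents):
    """XOR parity across equal-length extents: byte i of the parity is
    the XOR of byte i of every extent. With volume-level striping, a
    change to *any* extent invalidates this parity for the whole stripe,
    which is why keeping extents immutable is attractive."""
    return bytes(reduce(lambda a, b: a ^ b, column)
                 for column in zip(*extents))

extents = [b"aaaa", b"bbbb", b"cccc"]
p = parity(extents)

# XOR parity lets us recover any single lost extent from the survivors:
recovered = parity([extents[0], extents[2], p])
assert recovered == extents[1]
```

Real erasure codes (e.g. Reed-Solomon) tolerate more failures than single-parity XOR, but the rewrite-amplification argument is the same: mutating one extent forces a parity rewrite across the stripe.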

Imagine writing a whole article like this and only including one really unclear photo with a distracting background and not indicating which bat is which in the photo.

That worked because while the link may have been slow, it was circuit-switched and generally provided the 2400 bits. "Bad wifi" is unbelievably bad compared to an old dial-up link. It's so much worse than you're imagining.

Because IMAP sucks on bad network links. It involves a huge number of round trips to synchronize the state, and re-establishing the shared state when the connection is interrupted takes forever.

A lot of online commenters refuse to believe this but the standard Gmail interface is highly optimized to cope with bad network connections, hide latency, and recover from interruptions. If you have the code assets and initial state cached in your browser, it behaves very well under bad network conditions.
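A crude latency model of the difference being described. The round-trip counts here are illustrative assumptions, not measurements of IMAP or Gmail:

```python
# Sequential round trips degrade linearly with RTT, so a chatty protocol
# falls apart on a high-latency link even when bandwidth is adequate.
def sync_time(round_trips, rtt_s):
    """Total time for a sequence of dependent request/response exchanges."""
    return round_trips * rtt_s

# Assumed: a fresh IMAP state sync needs hundreds of dependent exchanges
# (SELECT, FETCH flags, FETCH bodies...), while a cached webmail client
# needs only a handful to reconcile deltas.
imap_fresh_sync = sync_time(round_trips=200, rtt_s=0.8)  # 160 s on bad wifi
cached_webmail = sync_time(round_trips=5, rtt_s=0.8)     # 4 s
```

The model ignores packet loss and reconnection, both of which hit the chatty protocol harder, so if anything it understates the gap.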


yeah, it's fair that you can just use IMAP, sync before your trip, then send after.

but I was on a flight, didn't have Gmail or Superhuman cached and could not get either to even load. I do suspect that if it were already loaded, Gmail probably would have functioned decently well.

still Gmail and Superhuman just seem...bloated. kinda cool to just have a simple, open source interface for the Gmail REST API.

