A good blog post has lots of hard parts (layout, scope, visuals, audience).
Laurence Tratt, you nailed it for me!
I loved the details you put in "Benchmarking methodology" and the clean layout with instructive visuals.
Title is a bit sensational: it should really be "1/4 of SF charging stations are broken". They rounded 22.7% up to 33.3%, bypassing the more logical/sensible 25%.
I find your comment a bit sensational. They included the 4.9% of chargers that have short cables, and therefore rounded 27.6% up to 33%.
That said, I agree that they should not use the term "dud" for chargers that are technically working but non-functional due to a design flaw (cable length). Though of course even 27.6% is closer to 25% than to 33.3% (|27.6 - 25| = 2.6 vs. |27.6 - 33.3| = 5.7)...
I hate absolute ratings (e.g. 5/5 with 1 vote being ranked higher than 4.99/5 with 100 votes).
All the top games seem to have a single rating giving them a perfect score, so other fabulous games with many more ratings but, say, a 4.97 average get buried.
I believe computing (or even knowing about) the Wilson score is beyond the capabilities of your typical full-stack developer, but one could at least have the common sense to hide ratings until an item has a sufficient number of them (say, 10).
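For what it's worth, the Wilson lower bound really is only a few lines. Here's a minimal sketch in Python, treating ratings as positive votes out of a total (the function name and the z default are my own choices, not from any particular library):

    from math import sqrt

    def wilson_lower_bound(positive: int, total: int, z: float = 1.96) -> float:
        # Lower bound of the Wilson score interval for a Bernoulli
        # proportion; z = 1.96 gives ~95% confidence.
        if total == 0:
            return 0.0
        p = positive / total
        denom = 1 + z * z / total
        centre = p + z * z / (2 * total)
        spread = z * sqrt((p * (1 - p) + z * z / (4 * total)) / total)
        return (centre - spread) / denom

Sorting by this instead of the raw average handles exactly the case above: wilson_lower_bound(1, 1) ≈ 0.21 while wilson_lower_bound(97, 100) ≈ 0.92, so the game with many good ratings beats the single perfect vote.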
It is highly unlikely that the duplicate portions of the file will have an offset that's a multiple of 2^16, which would be required for chunks to have matching hashes. On the client side you could theoretically run LBFS-style chunking over your files, but on the swarm side this isn't going to happen.
> It is highly unlikely that the duplicate portions of the file will have an offset that's a multiple of 2^16, which would be required for chunks to have matching hashes.
That's exactly what chunking based on a rolling hash solves. You set the average size of chunks and the content controls the exact boundaries.
Right, exactly. Chunk boundaries are not determined by a fixed size, but rather fall wherever some prefix of the rolling hash matches a fixed pattern. Chunk sizes will therefore vary, but by controlling the length of that prefix you can set the average chunk size. Besides the LBFS paper, another nice writeup here: https://moinakg.wordpress.com/2013/06/22/high-performance-co...
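To make that concrete, here's a minimal sketch of content-defined chunking in Python; the window size, mask, and chunk bounds are illustrative parameters of my own choosing, not LBFS's actual constants:

    WINDOW = 48              # bytes in the sliding window
    MASK = (1 << 13) - 1     # match 13 low bits -> ~8 KiB average chunks
    MIN_CHUNK = 2 * 1024     # avoid pathologically small chunks
    MAX_CHUNK = 64 * 1024    # force a cut eventually
    PRIME = 31
    MOD = 1 << 32
    POW = pow(PRIME, WINDOW - 1, MOD)  # removes the oldest byte in O(1)

    def chunks(data: bytes):
        start, h, window = 0, 0, []
        for i, b in enumerate(data):
            if len(window) == WINDOW:              # slide: drop oldest byte
                h = (h - window.pop(0) * POW) % MOD
            window.append(b)
            h = (h * PRIME + b) % MOD
            size = i - start + 1
            # Cut where the hash's low bits hit a fixed pattern, subject
            # to minimum and maximum chunk sizes.
            if size >= MAX_CHUNK or (size >= MIN_CHUNK and (h & MASK) == 0):
                yield data[start:i + 1]
                start, h, window = i + 1, 0, []
        if start < len(data):
            yield data[start:]

Because the cut points depend only on the bytes inside the sliding window, identical data yields identical chunks even when its offset shifts, which is exactly what fixed 2^16-byte pieces can't give you.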
IIUC, the restorers are not changing the original but merely undoing the degradation that works of art suffer. The paintings are 600 years old; you'd figure the colors were a lot more vibrant when they were painted than in the "pre-restoration" state.
A Vermeer painting was recently restored, and the work went beyond the usual "make it look brand new": restorers removed paint layers that, we've since learned, someone else added over part of Vermeer's original work to hide it. The painting now on display shows something markedly different from what centuries of viewers have seen.
On a less smug note, truly digital art might not have that problem, but digital representations of analogue art (which is what's being discussed here) do. Most movies are just digital encodings of analogue film, and it's only very recently that this stopped being true.
And that's not to mention that the way you view digital files also affects them. There's the general trend towards denser displays requiring higher-resolution files to appear crisp, but even the screen technology itself is a factor. Take a look at how different pixel sprites look on modern displays compared to the CRTs they were designed for: [1][2]