SethTro's comments | Hacker News

Do you know the subject line of the follow-up email (for those laid off)?


"Notice of unemployment" (personal email address)



Ha, I completely forgot I posted it already. But it's always good to share it again.


A good blog post has lots of hard parts (layout, scope, visuals, audience). Laurence Tratt, you nailed it for me! I loved the details you put in "Benchmarking methodology" and the clean layout with instructive visuals.


You can 3D print cones if you have access to a 3D printer. I've made a few larger and smaller sets for work, travel, and home.

https://www.thingiverse.com/thing:3784941


The paper reports only a single benchmark, from a single system, where they say

> In our performance test, we see a speedup of 18.2x, saturating and even surpassing this estimated theoretical upper bound.

IMHO if you exceed your theoretical bound, that's a sign you didn't do a good job analyzing the situation.


Yeah, I agree that they should have touched more on that. Something definitely is going on when your result exceeds what should be possible.

That said, I'm currently struggling with managing threads with my algorithm, so I'm looking forward to giving this a go.


Title is a bit sensational: it should really be "1/4 of SF charging stations are broken". They rounded 22.7% up to 33.3%, bypassing the more logical/sensible 25%.


I find your comment a bit sensational. They included 4.9% of chargers having short cables and therefore rounded 27.6% up to 33%.

That said, I agree that they should not use the term "dud" to refer to chargers that are technically working but non-functional due to a design flaw (cable length). Of course, even 27.6% should round to 25% before it rounds to 33%...


I'd like more details on those "too short" cables, e.g. did they try backing in (or pulling in head-first if they backed in the first time)?

I've run into a few stations with short cables, but none that couldn't reach the charge port if I parked in the other direction.


I find your comment a bit sensational. You both arrived at the same percentage.


I find your comment a bit sensational. They both parted from different percentages.


I hate absolute ratings (e.g. 5/5 with 1 vote being ranked higher than 4.99/5 with 100 votes).

All the top games seem to have a single review giving them a perfect rating, so other fabulous games with more ratings but, say, a 4.97 average get lost.

https://www.evanmiller.org/how-not-to-sort-by-average-rating...


I believe computing (or even knowing about) the Wilson score is beyond the capabilities of your typical full-stack developer, but one could at least have the common sense to hide ratings until an item has a sufficient number of them (say, 10).
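
For what it's worth, here's a minimal sketch of the lower bound the linked Evan Miller article recommends sorting by, assuming simple positive/negative ratings (the function name and the 95% default are my own choices, not anyone's production code):

    import math

    def wilson_lower_bound(positive, total, z=1.96):
        # Lower bound of the Wilson score interval for a Bernoulli
        # proportion; z = 1.96 corresponds to ~95% confidence.
        if total == 0:
            return 0.0
        phat = positive / total
        denom = 1 + z * z / total
        centre = phat + z * z / (2 * total)
        margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
        return (centre - margin) / denom

    # A single perfect vote no longer outranks 99 positives out of 100:
    print(wilson_lower_bound(1, 1))     # ~0.21
    print(wilson_lower_bound(99, 100))  # ~0.95

Sorting by that lower bound instead of the raw average is the whole point of the article.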


3blue1brown has a great series of videos about how you should reason about these types of ratings.

https://www.youtube.com/watch?v=8idr1WZ1A7Q


https://oshpark.com/ is my favorite for low-volume runs. You get 3 copies for $5/in^2.


It is highly unlikely that the duplicate portions of the file will have an offset that's a multiple of 2^16, which would be required for chunks to have matching hashes. On the client side you could theoretically run LBFS over your files, but on the swarm side this isn't going to happen.


> It is highly unlikely that the duplicate portions of the file will have an offset that's a multiple of 2^16 which would be required for chunks to have matching hashes.

That's exactly what chunking based on a rolling hash solves. You set the average size of chunks and the content controls the exact boundaries.


Right, exactly. Chunk boundaries are not determined by a fixed chunk size, but rather by where the rolling hash matches some prefix, which means chunk sizes will vary, but by controlling the prefix you can set the average chunk size. Besides the LBFS paper, there's another nice writeup here: https://moinakg.wordpress.com/2013/06/22/high-performance-co...
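
For anyone curious, here's a rough Python sketch of the idea (not the actual LBFS algorithm; the window size, hash parameters, and 16-bit mask are just illustrative):

    def chunk_boundaries(data, window=48, mask=(1 << 16) - 1,
                         base=257, prime=(1 << 61) - 1):
        # Content-defined chunking with a simple Rabin-Karp style rolling
        # hash: declare a boundary whenever the low 16 bits of the window
        # hash are zero, giving an expected chunk size of ~64 KiB.
        base_pow = pow(base, window - 1, prime)  # weight of the byte leaving the window
        h, last = 0, 0
        for i, b in enumerate(data):
            if i >= window:
                h = (h - data[i - window] * base_pow) % prime
            h = (h * base + b) % prime
            # enforce a minimum chunk size of one full window
            if i + 1 - last >= window and (h & mask) == 0:
                yield i + 1
                last = i + 1
        if last < len(data):
            yield len(data)

Because boundaries depend only on the bytes near them, inserting or deleting data early in a file only shifts the chunks around the edit; identical content elsewhere still produces identical chunks, unlike fixed 2^16-byte pieces.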


Rabin fingerprinting can do this I believe.

"the idea is to select blocks not based on a specific offset but rather by some property of the block contents"

https://en.wikipedia.org/wiki/Rabin_fingerprint#Applications


In practice yes, museums occasionally clean and restore their paintings, which shifts the color balance, grain, and texture in similar ways.

There are some great pre/post-conservation photos in this article: http://blogs.getty.edu/iris/as-layers-of-old-varnish-are-rem...


IIUC, the restorers are not changing the original but merely undoing the degradation that works of art suffer. The paintings are 600 years old; you'd figure the colors were a lot more vibrant when they were painted than in the "pre-restoration" state.

Digital art does not have this problem.


A Vermeer painting was recently restored, and it went beyond the normal "make it look brand new": restorers removed paint layers that, we've since learned, someone else added over part of Vermeer's original work to hide it. The painting now on display looks markedly different from how centuries of viewers have seen it.

https://hyperallergic.com/672345/vermeer-restoration-finally...


>Digital art does not have this problem.

https://xkcd.com/1683/

On a less smug note, truly digital art might not have that problem, but digital representations of analogue art (which the topic of discussion falls under) do. Most movies are just digital encodings of analogue film, and it's only very recently that this stopped being true.

And that's not to mention the fact that the way that you view digital files also affects them. There's the general trend towards denser displays requiring higher resolutions for a file to appear crisp, but even the screen technology itself is a factor: take a look at how different pixel sprites look on modern displays compared to the CRTs they were designed for:[1][2]

[1] https://twitter.com/crtpixels

[2] https://nerdlypleasures.blogspot.com/2015/03/the-case-for-co...

