
Note that BGZF solves gzip's speed problem (it works well with libdeflate and supports parallel compression/decompression) without breaking compatibility, since every BGZF file is still a valid gzip stream, and the hit to compression ratio is usually tolerable.
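To make the compatibility point concrete: a BGZF file is a series of concatenated gzip members, so any multistream-aware gzip reader decompresses it unchanged. A minimal Go sketch (the filename is a placeholder):

    package main

    import (
        "compress/gzip"
        "io"
        "log"
        "os"
    )

    func main() {
        // BGZF is concatenated gzip members; Go's compress/gzip
        // reader handles multistream input by default.
        f, err := os.Open("example.vcf.bgz") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        zr, err := gzip.NewReader(f)
        if err != nil {
            log.Fatal(err)
        }
        defer zr.Close()

        if _, err := io.Copy(os.Stdout, zr); err != nil {
            log.Fatal(err)
        }
    }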


Yes, though I think tooling could be better; if I had more spare time I'd write a linter which flagged defers in loops that didn't come with an accompanying comment.
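A rough sketch of the core check, using go/ast; it flags every defer in a loop body and skips nested function literals, whose defers run per call rather than per iteration. The comment-exemption logic is omitted, and nested loops may double-report:

    package main

    import (
        "fmt"
        "go/ast"
        "go/parser"
        "go/token"
        "log"
        "os"
    )

    func main() {
        if len(os.Args) < 2 {
            log.Fatal("usage: deferloop file.go")
        }
        fset := token.NewFileSet()
        file, err := parser.ParseFile(fset, os.Args[1], nil, 0)
        if err != nil {
            log.Fatal(err)
        }
        ast.Inspect(file, func(n ast.Node) bool {
            var body *ast.BlockStmt
            switch loop := n.(type) {
            case *ast.ForStmt:
                body = loop.Body
            case *ast.RangeStmt:
                body = loop.Body
            default:
                return true
            }
            ast.Inspect(body, func(inner ast.Node) bool {
                if _, ok := inner.(*ast.FuncLit); ok {
                    return false // closure defers don't pile up per iteration
                }
                if d, ok := inner.(*ast.DeferStmt); ok {
                    fmt.Printf("%s: defer inside loop\n", fset.Position(d.Pos()))
                }
                return true
            })
            return true
        })
    }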


I always think, Go is open source why not just fork it to add feature XYZ, then I realize I am better off using languages whose communities appreciate modern language design, instead of wasting my spare time with such things.


The problem with cgo is the high function-call overhead; you only want to use it for fairly big chunks of work. Calling an assembly function from Go is a lot cheaper.

https://pkg.go.dev/github.com/grailbio/base/simd has some work I’ve done in this vein.
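For a rough sense of the per-call gap on your own machine, here's a quick-and-dirty timing sketch (not from that package, and not a rigorous benchmark; the functions are trivial so almost all measured time is call overhead, and you need a C toolchain for the cgo part):

    package main

    /*
    static int addc(int a, int b) { return a + b; }
    */
    import "C"

    import (
        "fmt"
        "time"
    )

    //go:noinline
    func addGo(a, b int) int { return a + b }

    func main() {
        const n = 10000000
        var sink int

        start := time.Now()
        for i := 0; i < n; i++ {
            sink = int(C.addc(C.int(i), 1))
        }
        cgoPerCall := time.Since(start) / n

        start = time.Now()
        for i := 0; i < n; i++ {
            sink = addGo(i, 1)
        }
        goPerCall := time.Since(start) / n

        _ = sink
        fmt.Printf("cgo: %v/call, pure Go: %v/call\n", cgoPerCall, goPerCall)
    }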


I found it useful to walk through evaluation of a few elementary instances of this class using simpler methods, to put the main result in perspective. Specifically, replace the exponent 3 with 0 or 1.

If the exponent is 0, then you have the sum 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ..., from Zeno's most famous paradox (https://en.wikipedia.org/wiki/Zeno%27s_paradoxes). If you are fortunate, you previously learned that this converges to 1, and played around with this enough in your head to have a solid understanding of why. If you are less fortunate, I recommend pausing to digest this result.

Then, if the exponent is 1, you have the sum 1/2 + 2/4 + 3/8 + 4/16 + 5/32 + ... .

What happens if we subtract (1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ...) from it? We have (1/4 + 2/8 + 3/16 + 4/32 + ...) left over.

Then, if we subtract (1/4 + 1/8 + 1/16 + 1/32 + ...) from the latter, we still have (1/8 + 2/16 + 3/32 + ...) left over.

Then, if we subtract (1/8 + 1/16 + 1/32 + ...) from the latter, we still have (1/16 + 2/32 + ...) left over.

Continuing in this fashion, we end up subtracting off

(1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ...) + (1/4 + 1/8 + 1/16 + 1/32 + ...) + (1/8 + 1/16 + 1/32 + ...) + (1/16 + 1/32 + ...) + (1/32 + ...) + ...

and the total of everything subtracted equals the main sum. From the exponent-0 result, each parenthesized tail is itself a geometric series, so the total is just 1 + 1/2 + 1/4 + 1/8 + 1/16 + ... = 2.
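Written compactly, the subtraction process is just swapping the order of summation:

    \sum_{n \ge 1} \frac{n}{2^n}
      = \sum_{k \ge 1} \sum_{n \ge k} \frac{1}{2^n}
      = \sum_{k \ge 1} \frac{1}{2^{k-1}}
      = 2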


Two-bit values are common in bioinformatics, and I’ve found the ability to efficiently convert between packed arrays of 1- and 2-bit values to be valuable in that domain.
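As a scalar illustration of the simplest direction of that conversion (the linked package does this with SIMD; this standalone sketch just spreads each bit of a byte into its own 2-bit field via classic shift-and-mask bit spreading):

    package main

    import "fmt"

    // spread8 moves bit i of the low byte of x to bit position 2i,
    // leaving the odd positions zero, so each 1-bit value becomes a
    // 2-bit field.
    func spread8(x uint16) uint16 {
        x &= 0x00FF
        x = (x | (x << 4)) & 0x0F0F
        x = (x | (x << 2)) & 0x3333
        x = (x | (x << 1)) & 0x5555
        return x
    }

    func main() {
        fmt.Printf("%016b\n", spread8(0b10110001))
        // Output: 0100010100000001
    }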


One question: it is possible for the XOR of two consecutive floating-point numbers to have 32-63 leading zeros, and the numbers 32-63 do not fit in 5 bits. I imagine Gorilla treats these cases as 31 leading zeros?
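My guess at the handling, as a sketch (the clamp itself is my assumption, not something I've confirmed in the paper):

    package main

    import (
        "fmt"
        "math/bits"
    )

    // clampedLZ counts leading zeros of an XORed float64 bit pattern,
    // clamped so the count fits Gorilla's 5-bit field. (Clamping to 31
    // is my assumption about how counts of 32-63 are handled.)
    func clampedLZ(xorBits uint64) int {
        lz := bits.LeadingZeros64(xorBits)
        if lz > 31 {
            lz = 31
        }
        return lz
    }

    func main() {
        fmt.Println(clampedLZ(1)) // 63 actual leading zeros -> 31
    }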


(2008)


Added. Thanks!


The mod 1000 is actually a consequence of the test format: all answers are integers in [0, 999]; you fill in three digit bubbles.


Suppose the value of a network to an individual user is proportional to the number of users. Then the total value of the network, summed across all its users, is proportional to the square of the number of users.

See also https://en.wikipedia.org/wiki/Network_effect .
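In symbols: if each of the n users gets value cn, then

    V_{\text{total}} = \sum_{i=1}^{n} cn = cn^2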


Yup, and that's why I'd consider this a "101" essay. The larger exponential-growth trends (e.g. Moore's Law) practically always have a microstructure with many sigmoid curves. After you've encountered your first exponential, the "201" lesson about saturation becomes important.


And then generally all the smaller S-curves build on each other to make a curve that looks exponential for much longer, but ultimately turns out to be a sigmoid as well. One thing I muse on occasionally is whether the same will ultimately be true for technological progress as a whole.

