--adapt[=min=#,max=#]
zstd will dynamically adapt compression level to perceived I/O conditions. Compression level adaptation can be observed live by using command -v. Adaptation can be constrained between supplied
min and max levels. The feature works when combined with multi-threading and --long mode. It does not work with --single-thread. It sets window size to 8 MB by default (can be changed manually, see wlog). Due to the chaotic nature of dynamic adaptation, compressed result is not reproducible.
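Putting the man-page excerpt above together, a minimal invocation might look like this (filenames are hypothetical; the level bounds are illustrative, not recommendations):

```shell
# Adapt between levels 3 and 19, 4 worker threads, --long mode, verbose
# so the live level changes are visible during compression.
zstd --adapt=min=3,max=19 -T4 --long -v bigfile -o bigfile.zst
```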
I really should have read the documentation! That feature looks awesome, but in a quick test it could only use about 50% of the available output bandwidth. My upload speed is 50 Mbps, but zstd could only send about 25 Mbps.
Similarly, on a local speed test (SSD -> SSD), using a fixed compression level was much faster than --adapt.
"" note : at the time of this writing, --adapt can remain stuck at low speed when combined with multiple worker threads (>=2). ""
There are some --zstd tunables under ADVANCED COMPRESSION OPTIONS that might help:
Leave wlog alone unless you're willing to store the value out of band and pass it in again during decompression.
hashLog: a bigger number uses more memory to compress, but is often faster.
chainLog: a smaller number compresses faster, but with a worse ratio.
In your use case, monitoring general system utilization to identify bottlenecks might also help. My gut instinct is that you might already have hit a memory-bandwidth limit for the platform, at which point REDUCING the hashLog until it fits within your intended performance budget might yield better bandwidth results. Reducing the chainLog value might have the same effect.
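As a sketch, those tunables are passed via the --zstd= syntax (the values and filenames here are illustrative assumptions, not tuned recommendations):

```shell
# Start from level 19's defaults, then shrink hashLog/chainLog to reduce
# the compressor's working set. wlog is left alone, per the caveat above.
zstd -19 --zstd=hlog=22,clog=20 -T4 -v bigfile -o bigfile.zst
```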
If you're running your test over the internet (fluctuating latency, some packet loss), try enabling the BBR [1] TCP congestion control algorithm on the sender side to utilize the available bandwidth more efficiently.
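On a Linux sender (kernel 4.9+ assumed), enabling BBR is a two-line sysctl change; this is a config sketch, requires root, and alters system-wide TCP behavior:

```shell
# BBR pairs with the fq qdisc on older kernels.
sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr
sysctl net.ipv4.tcp_congestion_control   # confirm it took effect
```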
This very much depends on your use case. I recently did the planet for zoom level 12 as I mentioned elsewhere on this article. I can go into details if you're interested.
Having JUST done this for zoom level 12 for the planet: it's tedious. It took 9 days to export to mbtiles, another 2 days going through pngcrush, and about 40 minutes to make it into a squashfs, resulting in a 2.6G squashfs file that houses the 22,368,460 PNG files, which are then served up directly via nginx in our case.
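The squashfs-and-serve step might look roughly like this (the paths and options are my assumptions, not the poster's exact setup):

```shell
# PNGs are already compressed, so skip recompressing file data (-noD).
mksquashfs tiles/ tiles.squashfs -noD
# Mount read-only and point nginx's document root at the mount point.
mount -t squashfs -o loop,ro tiles.squashfs /srv/tiles
```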
It would be cool if somebody could crowd-source an EBS volume (or some other block storage) template of this that anybody else could easily attach to their cloud instance.
I thought this too. I invested some time because tmux is shipped by default in OpenBSD, and thus I don't have to install screen on my firewalls. I even submitted a patch to tmux to remove all the bindings that I didn't care about, so in that early transitional period, when I was using the help screen often, only the commands I actually use came up.
I REALLY like that the hardstatus line equivalent shows the command that's currently executing (think: bash, make, ./configure, etc.). That said, if you don't use the hardstatus line, I don't think there's much of a reason to switch unless you want multiple different-size panes, which screen doesn't really support.
For something like developer testing against frozen snapshots of data, you don't have the CRUD aspects, so turn fsync off (fsync = off in postgresql.conf). You'll note a large speed increase when you're not waiting on the disk to confirm transactions.
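A minimal postgresql.conf fragment for that kind of throwaway snapshot testing (the extra two settings are my additions in the same spirit; never use this on data you care about, since a crash can corrupt the cluster):

```
# postgresql.conf -- disposable test instances only
fsync = off
synchronous_commit = off   # don't wait for WAL flush at commit
full_page_writes = off     # only sensible alongside fsync = off
```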
Except most people think: iPhone, iPhone 3G, iPhone 3GS, and iPhone 4. There's an argument to be made that the iPhone 4 is the 4th generation of iPhones.
I just booked a trip to Key West, FL recently. I used KAYAK, and throughout the entire process I was thinking there must be a better way. What took me nearly two hours a few weeks ago would have taken me mere minutes on this site. I will not forget this site.