
Thank you for your opinion. Well... it did not just fail: cryptsetup opened everything fine, but the BTRFS tools did not find a valid filesystem on the decrypted volume.

While it could have been a bit flip that destroyed the whole encryption layer, BTRFS debugging revealed that there were traces of BTRFS headers after opening the volume with cryptsetup, and some of the data on the decrypted partition was still there...

This probably means the encryption layer was fine; the BTRFS part just could not be repaired or restored. The only explanation I have for this is that something resulted in a dirty write, which destroyed the whole partition table, the backup partition table and, since I used subvolumes and could not restore anything, most of the data.
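A minimal sketch of that kind of check, assuming the LUKS mapping has already been opened. The device names and the demo image here are hypothetical stand-ins; on a real system you would read from the decrypted mapping (e.g. /dev/mapper/cryptroot) instead of a scratch file:

```shell
# Stand-in for the decrypted device; on real hardware this would be the
# mapping created by e.g. `cryptsetup open /dev/nvme0n1p2 cryptroot`.
img=demo.img
truncate -s 1M "$img"

# The primary btrfs superblock lives at offset 0x10000 (65536 bytes); its
# magic "_BHRfS_M" sits 64 bytes into it, i.e. at absolute offset 65600.
# Plant the magic so the detection step below has something to find.
printf '_BHRfS_M' | dd of="$img" bs=1 seek=65600 conv=notrunc status=none

# Detection: read 8 bytes at the magic's offset and compare.
magic=$(dd if="$img" bs=1 skip=65600 count=8 status=none)
if [ "$magic" = "_BHRfS_M" ]; then
    echo "btrfs superblock traces found"
else
    echo "no btrfs superblock at the expected offset"
fi
```

Finding the magic while `btrfs check` still fails is consistent with an intact crypto layer on top of a destroyed filesystem.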

Well, maybe it was my fault but since I'm using the exact same system with the same hardware right now (same NVMe SSD), I really doubt that.



> Well, maybe it was my fault but since I'm using the exact same system with the same hardware right now (same NVMe SSD), I really doubt that.

Anecdotes can be exchanged in both directions: I've been running heavy data processing with maximum possible throughput on top of btrfs RAID for 10 years already, and I have never had any data loss. I am absolutely certain that if you expect data integrity while relying on a single disk, it is your fault.


Reliability is about the variety of workloads, not the amount of data or throughput. It's easy to write a filesystem which works well in the ideal case; it's the bad or unusual traffic patterns which cause problems. For all I know, maybe that complete btrfs failure was because of a kernel crash caused by bad USB hardware. Or there was a cosmic ray hitting a memory chip.

But you know whose fault it is? It's btrfs's. Other filesystems don't lose entire volumes that easily.

Over time, I've abused ext4 (and ext3) in all sorts of ways: overwrite a random sector, mount it twice (via loop, so the kernel's double-mount detector did not work), use bad SATA hardware which introduced bit errors... There was some data loss, and sometimes I had to manually sort through tens of thousands of files in "lost+found", but I did not lose the entire filesystem.
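That kind of abuse is easy to reproduce on a throwaway image without root. A rough sketch (the file name is made up, and it assumes e2fsprogs is installed):

```shell
# Create a small ext4 filesystem in a plain file (no root needed).
img=ext4demo.img
truncate -s 8M "$img"
mkfs.ext4 -q -F "$img"

# "Overwrite a random sector": clobber one 512-byte sector mid-image.
dd if=/dev/urandom of="$img" bs=512 seek=4000 count=1 conv=notrunc status=none

# e2fsck repairs what it can; orphaned pieces would land in lost+found.
e2fsck -fy "$img" >/dev/null 2>&1
echo "first fsck pass exit code: $?"

# A follow-up read-only check should now come back clean.
e2fsck -fn "$img" >/dev/null 2>&1 && echo "filesystem consistent"
```

Depending on whether the clobbered sector held metadata or free blocks, the first pass exits 1 (errors fixed) or 0 (nothing to fix) — either way the filesystem as a whole survives.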

The "entire partition loss" only happened to me when we tried btrfs. It was part of a Ceph cluster, so no actual data was lost... but as you may guess, we did not use btrfs ever again.


> but as you may guess we did not use btrfs ever again.

There are scenarios where btrfs currently can't be replaced: high performance + data compression.


Sure, I can believe this. It does not change the fact that some people encounter complete data loss with it.

Sadly, there are people (and distributions) who recommend btrfs as a general-purpose root filesystem, even for the cases where reliability matters much more than performance. I think that part is a mistake.


I would recommend btrfs as a general-purpose root filesystem. Any FS will have people encountering data loss. I can believe btrfs has an N times higher chance of data loss because it's packed with features and needs to maintain various complicated indexes which are easier to corrupt, but I also believe that one should be ready for their disk to fail at any minute regardless of the FS, and do backup/replication accordingly.


While I did that and lost next to nothing, I still think that this should not be the default approach to developing a filesystem... it should be ready to restore as much as possible in case of hardware failure or data corruption.


There is a standard approach: you set up RAID, and the FS will restore as much as possible, likely everything. Adding extra complexity to cover some edge cases is maybe overkill.
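For btrfs itself that standard approach looks roughly like this. A sketch, not a recipe: /dev/sdb, /dev/sdc and /mnt/data are hypothetical, and the commands need root and two spare disks:

```shell
# Mirror both data and metadata across two devices.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/data

# A scrub verifies checksums and rewrites bad copies from the good mirror.
btrfs scrub start -B /mnt/data
```

With a second copy of every block, checksum failures become repairable instead of fatal.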


OpenZFS does a better job here, at least if you can deal with an out-of-tree filesystem.


Actually, my personal benchmarks and multiple accounts on the internet say it is much slower than btrfs under load.


For smaller disk setups, possibly, but at a large enough scale ZFS ends up beating btrfs.


I test on 2TB datasets. Do you have any specific pointers that would support your claim?


I have 100+TB datasets, and with a large enough SSD/RAM for the L1/L2 ARC, ZFS edges out.

Hell, even the compression algorithm that ZFS has access to (LZ4) is faster than what btrfs uses, and with enough IO that matters.


> I have 100+TB datasets and with a large enough SSD/RAM for L1/L2 arc ZFS edges out.

And your claim is that you tested it against btrfs on the same workload? Maybe you could post some specific numbers from running the command from this thread? https://www.reddit.com/r/zfs/comments/1i3yjpt/very_poor_perf...

> Hell even the compression algorithm that ZFS has uses/has access to (LZ4) is faster than what btrfs uses and with enough IO that matters.

lz4's compression rate was 2x vs 7x for zstd on my data (a bunch of numbers), so I didn't see the point of using lz4 compression at all because the benefits are not large enough.
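As a rough illustration of that gap on compressible numeric data, using the standalone zstd and lz4 CLIs at default levels (only a proxy for the in-kernel compressors; the file names are made up):

```shell
# A stand-in dataset: a big pile of numbers, like the workload described above.
seq 1 200000 > nums.txt

# Compress with both algorithms at their default levels, keeping the source.
zstd -q nums.txt -o nums.zst
lz4 -q nums.txt nums.lz4

# Compare the resulting sizes; zstd should come out noticeably smaller.
ls -l nums.txt nums.zst nums.lz4
```

On highly structured input like this, both shrink the file substantially, but zstd's ratio advantage is what makes the speed-vs-ratio trade-off workload dependent.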



