Hacker News

I read this at the time and, frankly, my takeaway was more that:

> There are applications where 512b drives and 4K drives are not compatible; for example, in some ZFS pools you can't replace a 512b SSD with a 4K SSD

...the ZFS design was fundamentally fucked up. Intel have merely exposed a core design problem, because sooner or later you aren't going to be able to find drives with 512-byte sectors at all.



ZFS sector size (the ashift parameter) is set at vdev creation time. That is, when you get a bunch of drives together and want redundancy or striping, you create a vdev composed of several drives, in your desired mode of redundancy (and/or striping). A pool is composed of multiple vdevs, and ZFS filesystems all allocate from a common pool. So it's generally only a problem when you're replacing an existing drive in a vdev after it fails.
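To make that concrete, here's roughly what pinning the sector size at creation looks like on OpenZFS. ashift is a power of two, so ashift=9 means 512-byte sectors and ashift=12 means 4K; the pool name and device paths below are placeholders:

```shell
# Create a mirrored vdev that uses 4K sectors, regardless of what
# the drives report. "tank" and the device paths are hypothetical.
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# Verify the ashift each vdev actually got:
zdb -C tank | grep ashift
```

Note that ashift is per-vdev, not per-pool, so any later `zpool add` should pass `-o ashift=12` as well if you want new vdevs to match.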

ZFS doesn't support a bunch of things. It has no defragmentation. Filling a ZFS pool much north of 90% tends to kill its performance, even after you delete stuff to bring it back down again. The usual answer to these things is "wipe the pool and restore from a backup", or "zfs send <snapshot> | zfs receive <filesystem>". The answer to changing the sector size of a vdev is the same, just as it is for removing a disk from a vdev or reconfiguring your redundancy in most cases.
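The send/receive route is essentially a replicate-and-swap: snapshot the old pool, stream it into a pool built with the new ashift, and retire the old one. A minimal sketch, with placeholder pool and dataset names:

```shell
# Take a recursive snapshot of the source dataset tree.
zfs snapshot -r oldpool/data@migrate

# Replicate the whole tree (snapshots, properties, children) into
# the new pool. -R sends the full replication stream; -F lets the
# receive side roll back to accept it.
zfs send -R oldpool/data@migrate | zfs receive -F newpool/data
```

For a pool that stays in use during the copy, you'd follow up with an incremental (`zfs send -R -i @migrate ...`) before the final cutover.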

This is just how zfs is currently implemented. It was designed for Sun's customers, for whom having backup for the whole pool, or having a whole second pool to stream to, is not a big deal. Using it in a home or small business context consequently requires more care and forethought.


> ZFS doesn't support a bunch of things.

I am well aware of this, having been running production systems with it since 2008, shortly after it stopped silently and irretrievably corrupting data.

> It was designed for Sun's customers, for whom having backup for the whole pool, or having a whole second pool to stream to, is not a big deal.

The idea that I have to destroy and re-create pools for so many not-especially-uncommon events runs pretty counter to the way ZFS generally does a good job of being an enterprise filesystem. "Throw it away and restore from backup" is not a good answer.


> "Throw it away and restore from backup" is not a good answer.

Honestly, when you think about the life cycle of many storage systems, it is pretty reasonable. Once the drives get to a certain age, you tend to have to replace them anyway, and after the array is beyond a certain age, you want to replace the whole thing.

It makes a certain sick sense to expect a lot of enterprise customers to have a strategy for failover to a new storage pool.


I believe 80% is where it switches allocation strategies by default. Ideally you don't want to cross that threshold.
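Whatever the exact threshold is on your version, pool utilization is easy to keep an eye on from the command line (pool name is whatever yours is):

```shell
# Show size, allocated space, free space, and percent used per pool.
zpool list -o name,size,alloc,free,capacity
```
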


If you know you can't get suitable replacements, it's an easy problem: copy the data to a new pool and retire the old one. In this case, since the part number didn't change, there was no way to know.

Sector size is a pretty fundamental property of a disk drive.
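And it's one you can at least inspect before buying a replacement. On Linux, the kernel exposes what the drive reports (device names here are placeholders):

```shell
# Logical and physical sector sizes as reported by the kernel.
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sda

# Or read them straight from sysfs:
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/physical_block_size
```

The catch in the article's case is that this tells you nothing until the drive is already in hand, since the part number stayed the same.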


He mentions it's only in some pool configurations. I imagine there are configurations where the 512b-vs-4K difference matters and the drives aren't interchangeable.



