Does anyone know what the technology behind the tiering on QNAP NAS systems is? I use an SSD RAID 1 in front of a RAID 10, which seems to work great.
IMHO flexible tiering rather than caching would be very nice for many systems, as it is rather difficult to teach users to separate stale data from frequently changing data. It often does not have to be perfect.
I looked at Autotier before, but the development looked pretty stale (which is not too bad per se if it is stable). Does anyone have experience with, or recommendations for, putting bcachefs on top of networked block storage such as Ceph? Since Ceph's SSD cache tiering is pretty much deprecated by now, at work we looked for a solution to marry our HDD and SSD pools for users who don't want to put too much thought into tiering by mount points.
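For reference, a single-node bcachefs tiering setup looks roughly like this (device names are placeholders, and I haven't tried this on top of mapped RBD images, which is exactly the open question):

```shell
# Format a tiered bcachefs filesystem: writes land on the SSD
# (foreground target), data is migrated to the HDD tier in the
# background, and hot data is promoted back to the SSD on read.
bcachefs format \
    --label=ssd.ssd1 /dev/nvme0n1 \
    --label=hdd.hdd1 /dev/sdb \
    --foreground_target=ssd \
    --background_target=hdd \
    --promote_target=ssd

# Mount all member devices together as one filesystem.
mount -t bcachefs /dev/nvme0n1:/dev/sdb /mnt/tiered
```

In principle the HDD-tier device could be a network block device (e.g. one mapped with `rbd map`), but whether that behaves well under bcachefs's background rebalancing is the part I'd want to hear experience with.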
I used lvmcache to put ZFS on top of local NVMe for write-back caching of iSCSI targets, since ZFS already has good built-in read caching.
It worked pretty well in the limited tests I did, but it's not magic. The main reason I didn't pursue it is that it felt a bit like a house of cards. On the positive side, though, one could always mount the underlying storage, i.e. the partitions serving the iSCSI targets, as a local pool on another machine.
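Roughly what that setup looked like, if anyone wants to try it (device names and sizes here are made up, and note that losing a write-back cache device can mean losing data):

```shell
# One volume group spanning the slow disk and the NVMe.
pvcreate /dev/sdb /dev/nvme0n1
vgcreate vg0 /dev/sdb /dev/nvme0n1

# Data LV on the HDD, cache volume on the NVMe.
lvcreate -n tgt0 -L 1T vg0 /dev/sdb
lvcreate -n tgt0cache -L 100G vg0 /dev/nvme0n1

# Attach the NVMe LV as a write-back cache for the data LV.
lvconvert --type cache --cachevol vg0/tgt0cache \
    --cachemode writeback vg0/tgt0
```

vg0/tgt0 is then exported as an iSCSI LUN, with the ZFS pool built on the initiator side.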
You could use bcachefs on the OSD drives, but you can also just point the OSD's WAL/DB at a partition on the SSD and keep the data on the HDD. You don't have to tier pools to get help from SSDs with small writes and metadata.
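With ceph-volume that split looks like this (device paths are examples):

```shell
# Create a BlueStore OSD with the bulk data on the HDD and the
# RocksDB metadata on an SSD partition. When no separate
# --block.wal is given, the WAL is co-located on the DB device.
ceph-volume lvm create --bluestore \
    --data /dev/sdc \
    --block.db /dev/nvme0n1p1
```

Small writes and metadata then hit the SSD, while the HDD only sees the large sequential object data.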