I'm with you on that. Obviously lol. There are two paths to getting the complex functionality without the problems: a filesystem over an object model, or an application layer over a filesystem model.
In the 80's-90's, many systems aiming for better robustness or manageability realized filesystems were too complex. So, they instead implemented storage as objects that were written to disks. Many aspects important for security or reliability were handled at this simple representation. The filesystem was a less-privileged component that translated the complexities of file access into procedure calls on the simpler object storage. Apps that didn't need files per se could also call the object storage directly. Some designs put the object-storage management directly on the disk with a cheap, on-disk CPU, which supported integrated crypto, defrag, etc. NASDs [1] and the IBM System/38 [2] are sophisticated examples in this category.
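To make the division of labor concrete, here's a minimal sketch of the idea (all names hypothetical, not from any of the cited systems): a flat object store that addresses opaque objects by ID, with the "filesystem" as a thin, less-privileged translator that turns paths into procedure calls on it. Apps that don't need files can call the store directly.

```python
class ObjectStore:
    """The simple, privileged representation: opaque objects addressed
    by ID. Security/reliability enforcement would live at this layer."""
    def __init__(self):
        self._objects = {}
        self._next_id = 0

    def create(self, data=b""):
        oid = self._next_id
        self._next_id += 1
        self._objects[oid] = data
        return oid

    def read(self, oid):
        return self._objects[oid]

    def write(self, oid, data):
        self._objects[oid] = data


class FileLayer:
    """Less-privileged component: translates file paths into procedure
    calls on the simpler object store."""
    def __init__(self, store):
        self.store = store
        self._paths = {}  # path -> object ID

    def write_file(self, path, data):
        if path not in self._paths:
            self._paths[path] = self.store.create()
        self.store.write(self._paths[path], data)

    def read_file(self, path):
        return self.store.read(self._paths[path])


store = ObjectStore()
fs = FileLayer(store)
fs.write_file("/etc/motd", b"hello")
print(fs.read_file("/etc/motd"))   # → b'hello'
oid = store.create(b"raw blob")    # apps bypassing the file layer
```

The point of the split is that the store's invariants can be audited on a tiny interface, while the messy path/permission/naming logic sits in a component that can't corrupt the objects it doesn't own.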
The other model was building complex filesystems on simpler ones. The clustered filesystems in supercomputing and the distributed stores in cloud and academia are good examples of this. The underlying filesystem can be something simple, proven, and high-performing. Then, the more complex one is layered over one or more nodes to provide more functionality while mostly focusing on high-level concerns. Google File System [3] and Sector [4] are good examples.
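The layering can be sketched roughly like this (hypothetical names, not GFS's actual design; real chunk sizes were 64 MB, not 4 bytes): each "node" does nothing but simple local-file reads and writes, while the complex layer above handles only namespace and placement.

```python
import os
import tempfile

CHUNK_SIZE = 4  # tiny for illustration only

class Node:
    """One storage node: nothing but plain local-file chunk I/O,
    which a simple, proven underlying filesystem handles."""
    def __init__(self):
        self.root = tempfile.mkdtemp()

    def put_chunk(self, chunk_id, data):
        with open(os.path.join(self.root, chunk_id), "wb") as f:
            f.write(data)

    def get_chunk(self, chunk_id):
        with open(os.path.join(self.root, chunk_id), "rb") as f:
            return f.read()


class ClusterFS:
    """The complex layer: namespace and chunk placement across nodes,
    with no low-level storage concerns of its own."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.index = {}  # path -> [(node, chunk_id), ...] in order

    def write(self, path, data):
        placements = []
        for i in range(0, len(data), CHUNK_SIZE):
            node = self.nodes[(i // CHUNK_SIZE) % len(self.nodes)]
            cid = f"{abs(hash(path))}_{i}"
            node.put_chunk(cid, data[i:i + CHUNK_SIZE])
            placements.append((node, cid))
        self.index[path] = placements

    def read(self, path):
        return b"".join(n.get_chunk(c) for n, c in self.index[path])


cfs = ClusterFS([Node(), Node(), Node()])
cfs.write("/data/log", b"abcdefghij")
print(cfs.read("/data/log"))  # → b'abcdefghij'
```

Because the per-node storage is just files in a battle-tested local filesystem, the layered system inherits its reliability for free and only has to get the distribution logic right.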
So, we can have the benefits of simple storage and complex filesystems with few of their problems. That many such systems are deployed in production should reduce skepticism that it sounds too good to be true. Now, we just need more efforts in these two categories to make things even better. Nice as ZFS and Btrfs look, I'd rather they had just improved XFS in the directions of these categories instead. The effort spent duplicating functionality could instead have gone into innovation on top of whatever innovations they produced.
[1] http://www.pdl.cmu.edu/PDL-FTP/NASD/CMU-CS-97-185.pdf
[2] http://homes.cs.washington.edu/~levy/capabook/index.html
[3] https://en.wikipedia.org/wiki/Google_File_System
[4] http://sector.sourceforge.net/