Making data smaller, slower, etc. doesn't solve the problem. Good design and implementation are what it takes. Wirth's work shows that simplifying the interfaces, the implementation, and so on can certainly help. Personally, I think the best approach is simple, object-based storage at the lower layer, with the complicated functionality executing at a higher layer through an interface to it. Further, for reliability, keep several copies on different disks with regular integrity checks to detect and mitigate the issues that build up over time. There are more complex, clustered filesystems that do a lot more than that to protect data. They can be built similarly.
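Here's a minimal sketch of what I mean by the lower layer, assuming nothing beyond the standard library. The ObjectStore type, its directory-per-disk layout, and the Put/Get/Scrub names are all illustrative, not taken from any real filesystem: a dumb keyed blob store that writes each object to several independent directories (standing in for separate disks), records a checksum with it, and can scrub itself to detect and repair rot.

    package main

    import (
        "bytes"
        "crypto/sha256"
        "encoding/hex"
        "errors"
        "fmt"
        "os"
        "path/filepath"
    )

    // ObjectStore keeps one full copy of every object in each replica directory.
    type ObjectStore struct {
        replicas []string // hypothetical: one directory per physical disk
    }

    func (s *ObjectStore) path(dir, key string) string {
        sum := sha256.Sum256([]byte(key))
        return filepath.Join(dir, hex.EncodeToString(sum[:]))
    }

    // Put writes the blob, prefixed with its checksum, to every replica.
    func (s *ObjectStore) Put(key string, data []byte) error {
        sum := sha256.Sum256(data)
        record := append(sum[:], data...)
        for _, dir := range s.replicas {
            if err := os.WriteFile(s.path(dir, key), record, 0o644); err != nil {
                return err
            }
        }
        return nil
    }

    // Get returns the first replica whose checksum still verifies.
    func (s *ObjectStore) Get(key string) ([]byte, error) {
        for _, dir := range s.replicas {
            record, err := os.ReadFile(s.path(dir, key))
            if err != nil || len(record) < sha256.Size {
                continue
            }
            sum, data := record[:sha256.Size], record[sha256.Size:]
            if actual := sha256.Sum256(data); bytes.Equal(sum, actual[:]) {
                return data, nil
            }
        }
        return nil, errors.New("no intact copy of " + key)
    }

    // Scrub re-checks every copy of a key and rewrites all replicas from an
    // intact one -- the "detect and mitigate issues that build up" part.
    func (s *ObjectStore) Scrub(key string) error {
        good, err := s.Get(key)
        if err != nil {
            return err
        }
        return s.Put(key, good)
    }

    func main() {
        store := &ObjectStore{replicas: []string{"/tmp/diskA", "/tmp/diskB", "/tmp/diskC"}}
        for _, d := range store.replicas {
            os.MkdirAll(d, 0o755)
        }
        store.Put("greeting", []byte("hello"))
        data, _ := store.Get("greeting")
        fmt.Println(string(data))
        store.Scrub("greeting")
    }

That's the whole contract the layers above have to reason about: two write/read primitives plus a periodic scrub.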
The trick is making the data-change, problem-detection, and recovery mechanisms simple. Then each feature is a function that leverages them in a way that's easier to analyze. The features themselves can be implemented in a way that makes their own analysis easier. So on and so forth. Standard practice in rigorous software engineering. Not quite applied to filesystems...
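To make the "feature as a function over simple primitives" point concrete, here's a continuation of the hypothetical sketch above (same package, same imports). The versioned-document scheme and the Blobs/UpdateDocument/ReadDocument names are mine, purely for illustration: the feature only sees the two lower-layer primitives, so analyzing it reduces to analyzing Put and Get.

    // Blobs is the only surface the higher layer is allowed to touch.
    type Blobs interface {
        Put(key string, data []byte) error
        Get(key string) ([]byte, error)
    }

    // UpdateDocument writes the new version as its own immutable object first,
    // then flips a tiny "current" pointer. A crash before the flip leaves the
    // previous version visible and intact, so recovery needs no special code.
    func UpdateDocument(s Blobs, name string, version int, body []byte) error {
        verKey := fmt.Sprintf("%s@v%d", name, version)
        if err := s.Put(verKey, body); err != nil {
            return err
        }
        return s.Put(name+"@current", []byte(verKey))
    }

    // ReadDocument follows the pointer, then fetches that version.
    func ReadDocument(s Blobs, name string) ([]byte, error) {
        cur, err := s.Get(name + "@current")
        if err != nil {
            return nil, err
        }
        return s.Get(string(cur))
    }

Each further feature gets the same treatment: a small function over an already-analyzed layer, never a special case buried in the storage code.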