Could people who use Docker in production comment on the current stability of the storage drivers?
One thing that frustrates me about Docker is that no matter which storage driver I use, problems always seem to arise.
With AUFS on Debian, I constantly had problems starting and stopping containers (problems mounting and unmounting AUFS layers). This was on Debian Wheezy, in the Docker 1.[1-3] era. I don't know if it has improved since.
I tried BTRFS on Debian Jessie and Arch Linux, and found it to be much slower than AUFS in every respect (creating images, starting containers, etc.). The fact that BTRFS doesn't share the inode cache between subvolumes may be the culprit behind slower application start times inside containers, when there are lots of containers running from the same base image.
Next, I tried Direct LVM on Debian Jessie. Initially I liked that, in my "personal benchmarks", container creation time stayed nearly constant as the number of images grew (contrary to BTRFS, where subvolume creation time increases with the number of images). But after a few days on this storage driver, errors started to appear: failures creating containers, corrupted LVM thin-pool metadata, etc.
My last try was overlay on Debian Jessie with the 4.5.1 kernel from backports. Apparently there is a bug in this specific kernel version that blocks containers from starting...
So, in my experience, BTRFS was the most stable storage driver, despite being much slower than AUFS and Direct LVM (on XFS).
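For anyone who wants to run the same comparisons: the driver can be selected in the daemon config rather than at install time (note that switching drivers hides your existing images and containers until you switch back). A minimal sketch of `/etc/docker/daemon.json`, assuming a Docker version recent enough to read it (1.12+; older versions take a `--storage-driver` flag on the daemon command line instead):

```
{
    "storage-driver": "overlay"
}
```

After restarting the daemon, `docker info` shows a "Storage Driver:" line confirming which one is active.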