Docker containers let you isolate the entire environment for your app. Let's say you're running an app on CoreOS in a container that needs python 1.2.3.
On your laptop you can build and test the new version of the app that needs python 1.2.4. Once you decide it's ready to go, you can push the new container onto the same CoreOS machine, so it's running both containers side by side. Without containers, running two versions of python on the same box is a real headache. If you had a Chef script that updated the system python to 1.2.4, you'd possibly break every other app on the box.
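The side-by-side trick works because each image bundles its own interpreter. A minimal sketch of the idea (hypothetical app file and image names, and a real Python tag since the 1.2.x versions above are made up):

```dockerfile
# Each image pins its own Python, independent of whatever the host has installed
FROM python:3.12-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

Build the old and new versions as, say, `myapp:old` and `myapp:new`, then `docker run -d` both on the same host; neither container sees the other's Python.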
Containers also let you do some cool things like signing and verifying a container before it's launched on the box. It should be bit-for-bit the same on your laptop as it is on the remote machine. Containers also boot within seconds, much faster than a VM. There have been a few tech demos floating around that spin up a new container with a web server to service every single web request, just to show how fast you can boot them. 300ms is pretty long for a web request, but it's the idea that counts.
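One way to get the sign-and-verify behaviour is Docker Content Trust: with it enabled, `docker push` signs the image and `docker pull` refuses anything whose signature doesn't verify. A sketch (the registry and image names are placeholders):

```shell
# Enable Docker Content Trust for this shell session
export DOCKER_CONTENT_TRUST=1
docker push registry.example.com/myapp:1.0   # image is signed on push
docker pull registry.example.com/myapp:1.0   # pull fails unless the signature verifies
```

These commands need a running Docker daemon and a trust-enabled registry, so treat them as an outline rather than something to paste blindly.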
Thanks, this is good info. For NodeJS and Rails, I use nvm and rvm, so I didn't have any problem with multiple environments. But yeah, I see your point that Docker can help in such a scenario, or when there's no equivalent of nvm/rvm.
Having a dependency on conflicting libc versions is a favourite problem that can be difficult to solve without some form of container (a VM, chroot, or something "in between" like LXC or jails). Another is a dependency on a different kernel (either a different major version of the same OS, or a different kernel entirely, like FreeBSD), which Docker (by design) doesn't solve.
I don't use ruby much, but it doesn't strike me as very easy to work with, or very reliable, for production deployment. But that might just be me. How well does it handle dependencies on conflicting modules with parts written in C?
Perhaps the most important point is that (when it makes sense) a docker setup might allow easier horizontal scale-out and/or redundancy.
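The scale-out point in practice: because the containers are identical and (for the web tier) stateless, you can just run more of them. A minimal Compose sketch, assuming a hypothetical `myapp` image and a `web` service:

```yaml
# docker-compose.yml — hypothetical names; the web tier is stateless so it can be replicated
services:
  web:
    image: myapp:1.2.4
    ports:
      - "8080"        # publish only the container port, so replicas get ephemeral host ports and don't collide
```

Then `docker compose up -d --scale web=3` gives you three identical copies behind whatever load balancer you put in front.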
All that said, keeping things simple is generally a good thing. But sometimes adding complexity in one area makes the overall system less complex.