It's not that it's any different, it's that it's standardized. The idea is that a Docker container would be portable between different PaaS hosts (and from your own staging environment to those hosts!) without rebuilding, because they'd all be using the "Docker standard for deployment."
A PaaS host saying they supported Docker would imply that they'd be using, for example, SquashFS for container format, AuFS instead of OverlayFS for union-mounts, LXC instead of OpenVZ/Xen/KVM for isolation, and any other set of things your container might subtly rely upon.
The culmination of this, I imagine, would be a PaaS host allowing you to specify the "stuff" you want to run just by the URL of the container-image.
Doesn't a standard involve, you know, standards? AFAIK a product name is not a standard.
What if the namespace changes? What if AuFS changes? What if LXC changes? Independently or all together? ABI changes? Version changes? Feature changes? Are all the licenses compatible? Will it ever support platforms other than just certain versions of Linux? Or languages other than Go?
I don't see a standard. I see marketing for a product and a mailing list to collect potential customers. But maybe I'm missing something.
Hi Peter, this website was only meant to be seen once Docker is actually open-source, which will be the case very soon.
I do think there is a need for a standard way to package and share software at the filesystem and process level - we don't pretend to define that standard, but hopefully we can contribute to it by open-sourcing a real-world implementation.
I guess I read too far into it when I saw the word "standard" everywhere and got excited - sorry about that. Do you plan on adding to your implementation the ability to differentiate between compatible versions/platforms, so one could use this on several cloud instances that aren't built the same?
1. every one of those attributes would be fixed against a given version of the (coming) Docker spec, and a given host would specify what version(s) of the spec it was compatible with.
2. Go is, I think, just the language the glue code is written in; not the language your own things-deployed-using-Docker must be written in.
3. It might support other Linux distros (Fedora, probably), but it won't support other OSes as hosts--because the whole point is to run things that need a POSIX-alike as their "outer runtime" (i.e. not Windows programs, etc.) The way to run these containers on another host will be to run Linux in a VM on that host, and run the containers in the VM--just like the way to play a Super Nintendo game "container" on your computer is to run it in a Super Nintendo VM. [Actually, come to think of it, game ROMs are a great analogy for precompiled SquashFS containers. I would adopt it if I were them :)]
The trick here is that their xmame (Docker) may not be the same build on all hosts, so it may not play the ROMs all in the same way or support all ROMs. A standard works to improve interoperability between different builds/hosts/etc as well as provide an expected set of operations and their results. If all they provide is just one version of one product and call that standardized, that's like releasing a new version of Internet Explorer and calling it a web standard.
Well, this is a good first step in the "free market" standardization process, though: get a public implementation out of what you would imagine standard-conformance to look like. Then, let the other guys (e.g. Heroku) get out their competing implementations. Then, find the similarities, resolve the differences, and write it down. Now you've got a standard.
In practice that does not work. Things get broken, people end up having to support 20 edge cases to use this "universal", "standardized" thing. Depends on the implementation, though.
"HTML 1.0" was the particular standard I had in mind. I guess I'm too used to coding multiplatform Javascript, but "end[ing] up having to support 20 edge cases to use this 'universal', 'standardized' thing" sounds like success in my books--in that you now have a (painfully) interoperating ecosystem, where before you had none. And it all gradually gets smoothed out as the spec evolves over the years, until you can't really tell the difference from a BDUF spec.
Correct. Docker is the direct result of dotCloud's experience running hundreds of thousands of containers in production over the last 2 years. We tried very hard to put it in a form factor which makes it useful beyond the traditional PaaS.
We think Docker's API is a fundamental building block for running any process on the server.
This might work if you only have to run three or four VMs on a box, and run several applications in each VM. Full PC virtual machines are much too heavyweight, though, for isolating thousands of individual processes per box, especially when most of them might just sit there doing nothing most of the time.
Though! If you want to, you can think of this standard as specifying an "ABI format" for high-level, lightweight VMs that happen to run on a "Linux machine" instead of, say, an "IA32 machine."
If you're doing this as a PaaS or for CI, you do this as part of building the new image and then pass the new qcow2 to your VM (maybe via libvirt). If you aren't doing this or something very similar, you're spinning your wheels and wasting time/resources.
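If it helps to picture it: qcow2's backing-file mechanism is copy-on-write, which can be sketched in miniature like this (a toy model of the concept, not qcow2's actual on-disk format; the `Overlay` class and block dicts are made up for illustration):

```python
class Overlay:
    """Toy copy-on-write image: reads fall through to the shared base
    unless this overlay has written that block itself."""
    def __init__(self, base):
        self.base = base   # dict: block index -> bytes (shared, never modified)
        self.delta = {}    # only the blocks modified in this overlay
    def write(self, idx, data):
        self.delta[idx] = data      # the base stays untouched
    def read(self, idx):
        return self.delta.get(idx, self.base.get(idx))

base = {0: b"kernel", 1: b"rootfs"}  # one base image, shared by many VMs
vm_a = Overlay(base)
vm_b = Overlay(base)
vm_a.write(1, b"rootfs + app A")     # only vm_a sees this change
assert vm_a.read(1) == b"rootfs + app A"
assert vm_b.read(1) == b"rootfs"     # vm_b still reads the shared base
```

This is why handing each VM its own thin qcow2 overlay of one base image is cheap: each derived image stores only its delta, not a full copy of the base.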
Other benefits: Docker images are basically tarballs, which makes them much smaller than full VM disk images.
And, importantly, Docker maintains a filesystem-level diff between versions of an image, and only needs to transmit each diff once. So you get tremendous bandwidth savings when transmitting multiple images created from the same base.
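The layer-deduplication idea can be sketched in miniature (a toy model, not Docker's actual wire protocol; `Registry`, `layer_id`, and the sample layers are all made up for illustration):

```python
import hashlib

def layer_id(data: bytes) -> str:
    # content-address each layer/diff by its hash
    return hashlib.sha256(data).hexdigest()

class Registry:
    def __init__(self):
        self.blobs = {}  # layer_id -> layer bytes already stored remotely
    def push(self, image_layers):
        """Upload only the layers the registry is missing; return bytes sent."""
        sent = 0
        for data in image_layers:
            lid = layer_id(data)
            if lid not in self.blobs:
                self.blobs[lid] = data
                sent += len(data)
        return sent

registry = Registry()
base = b"ubuntu rootfs" * 1000          # big base layer shared by both images
app_a = [base, b"app A files"]
app_b = [base, b"app B files"]

first = registry.push(app_a)   # sends the base plus app A's small diff
second = registry.push(app_b)  # base already present: sends only app B's diff
assert second < first
```

So two images built from the same base cost roughly one base transfer plus two small diffs, instead of two full images.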