> Open Image Spec ... As we continue to grow the contributor community to the Docker project, we wanted to encourage more work in the area around how Docker constructs images and their layers. As a start, we have documented how Docker currently builds and formats images and their configuration. Our hope is that these details allow contributors to better understand this critical facet of Docker as well as help contribute to future efforts to improve the image format. The v1 image specification can be found here: https://github.com/docker/docker/blob/master/image/spec/v1.m...
This is a great start, and I hope this doesn't sound negative, but this likely wouldn't be here if CoreOS hadn't shaken things up the way they did with ACI/Rocket.
> but this likely wouldn't be here if CoreOS hadn't shaken things up the way they did with ACI/Rocket.
It's not "likely", it's absolute.
The so-called Docker "Standard" was cooked up overnight the weekend ACI was released. Prior to ACI's release, Docker took active steps to prevent a unified standard from emerging... they were petrified that a unified image format standard would allow someone to swoop in and eat their lunch. Docker lived in a world where nobody else would dare build a competing container system.
It seems to have all started with this github issue[1]. Docker employees are initially interested in collaborating with CoreOS and helping to co-develop an open standard -- then Shykes shuts it all down.
They call Docker an implementation of their "Docker Standard", but it's really the other way around -- the "standard" was an afterthought -- and to that end it's unproven, untested, and likely incomplete. There are no other implementations of their "standard", so its holes have not been found. ACI already has several working implementations, and many more in the oven at this moment. Those implementers have contributed immensely back to the open standard and helped evolve it into a community-driven specification -- not one dictated by any single organization or source.
Shykes doesn't "shut it all down". And to be honest, I think he's got a pretty good point. If anything, he encourages the Docker format to be better documented:
> But here's a suggestion for you. If you were to complain that Docker's image format and runtime specification, as massively adopted as it is, is not appropriately documented, and it could be made easier to produce alternate implementations - then I would completely agree with you. In response, I would encourage the project maintainers to improve the specs documentation based on your suggestions. I would also encourage you to join the effort, and offer my help in the process.
> If anything, he encourages the Docker format to be better documented
This was after the Docker project deleted the original spec [1] in September 2013 -- ACI wasn't publicly announced until December 1st 2014, and then Docker rolled out their "standard" in a hurry on December 7th 2014.
That's not a spec. I've got no side in this, but what you're saying has happened and what actually seems to have happened based on reading your links doesn't match up, in my opinion.
Got the same impression. Now what's needed is to split the docker daemon into small parts, each running with least privilege.
An example of that: yesterday, while trying to see if I could implement `docker build` using available commands, I found out that the docker daemon shells out to git on the server side when a remote url is given. That seems risky, since the server runs with a lot of privileges that aren't needed for that task. https://github.com/docker/docker/blob/master/builder/job.go#...
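The pattern in question boils down to handing a client-supplied URL to a subprocess spawned by the privileged daemon. A minimal sketch (the helper name is hypothetical, not Docker's actual code in builder/job.go):

```go
package main

import (
	"fmt"
	"os/exec"
)

// buildGitCommand sketches the risky pattern: the daemon receives a remote
// URL from the client and shells out to git on the server side.
// (Hypothetical helper name; the real logic lives in builder/job.go.)
// The concern: git runs with the daemon's full (typically root) privileges
// while processing attacker-controlled input.
func buildGitCommand(remoteURL, dest string) *exec.Cmd {
	return exec.Command("git", "clone", "--depth", "1", remoteURL, dest)
}

func main() {
	cmd := buildGitCommand("https://example.com/app.git", "/tmp/build-context")
	// The argv handed to a subprocess of the privileged daemon:
	fmt.Println(cmd.Args)
}
```

Dropping privileges (or doing the clone client-side and uploading the context) would shrink that attack surface considerably.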
> trying to see if I could implement `docker build` using available commands
This should be possible soon! The only command missing is a symmetric `docker cp`. I've got most of the implementation ready to be reviewed in a pull request [1] (closed now, but I will reopen it soon).
> The way you phrase that suggests that user experience is more important than security
Of course it is. Steve Yegge put it best:
> But I'll argue that Accessibility is actually more important than Security because dialing Accessibility to zero means you have no product at all, whereas dialing Security to zero can still get you a reasonably successful product such as the Playstation Network.
Ha ha. You can always layer security around how you use Docker, but without any usability what do you have? Chop the network cable and you get the tradeoff you're looking for.
Your response is terrifying. Treating core, must-not-fail infrastructural systems as having the same security needs as a consumer-facing video game platform is, to me, very much misguided, and I think it betrays the difference in mindset between "product people" and "platform people": as a platform person, my stuff must not fucking break, and vendors that build stuff that makes things more likely to break are poisonous.
The continually lackadaisical approach to security and reliability (basic features around reliability being post-1.0 is unconscionably bad, and user privilege segmentation, as noted, still hasn't happened), in ways I can't patch around, is one of the reasons I'm honestly hoping for a Docker competitor to arise that would trade that growth-fuel "user experience" for security, reliability, and a less compromised ethos. (I am considering a move of my systems to OpenBSD, but in AWS that's difficult.)
This leads me to thinking about a larger problem in the current ecosystem: core infrastructure being tied to a VC-backed, growth-as-a-prerequisite startup (where you really can convince yourself that making it easy is more important than making it right) scares me deeply, because building the right thing becomes less incentivized than building the popular thing. I hope this is not the way of the future, because things already work poorly enough and are insecure enough as it is. =/
I'm happy to address any specific concerns you have about security. It's an important topic for me.
In this particular comment I could only spot one specific criticism, the lack of built-in user segmentation, so I will talk about that. I agree that this would be a nice feature. But we decided not to rush it, and instead to tell operators to rely on the underlying system's features for authentication, segmentation etc. In practice that means:
* If you have an https auth infrastructure in production, drop the appropriate middleware in front of your docker daemon, and rely on that.
* If you rely on ssh keys and unix access control in production, keep the default configuration of listening on a unix socket, and use regular unix users to decide who gets to talk to the socket.
* If you run trusted payloads, or if you run untrusted payloads with acceptable mitigation in place (no root inside the container, apparmor/selinux, inter-container networking disabled etc.), then go ahead and pool all your machines into a single swarm.
* If you run untrusted payloads, then map each trust domain to an isolated group of underlying machines. This is what Amazon, Google and others do when running customer payloads on Docker for example.
It would be a nice feature to segment Docker API endpoints by user, so that different users get different views (and different levels of access) of the same underlying daemon. But that requires implementing an authentication and authorization layer, and it requires changing the aspects of the Docker API which imply privileged access to the system -- for example, 'docker run -v /foo:/bar', which bind-mounts an arbitrary host path into a container. This represents serious engineering work, and as much as I would like to make you happy tomorrow, I don't think you will be any happier if we ship an unfinished feature.
She predicts every major technology has a breaking point and turning point.
I can't see why the same wouldn't be true for Docker. Rapid adoption leads to growing pains, which lead to introspection, which leads to fixing issues and a better product.
If you've been around the block, it's hard to see Rocket as competition. There is a lot of sunk cost in Docker already (Amazon, Google, Joyent, lots of startups), even if that's not obvious to CoreOS. Docker will be the predominant way we package our applications for the next 5-10 years.
> Docker will be the predominant way we package our applications for the next 5-10 years
That same effect will also drive a revolution in cloud infrastructure. I call the effect the "problem cloud" because it's a pain in the ass sometimes, just like a teenager.
The migration cost from Docker to Rocket will be (or would be) trivial compared to the cost of getting things to work with Docker-style containers in the first place.
I'm sure Docker will be important for a long time, but consider that it's still a bit player in application packaging and deployment -- far more has been invested in, e.g., AMIs for EC2, or tooling around VMware, to mention just two -- so it's by no means clear whether it will manage to maintain its lead in what is still a tiny, new space.
> Docker will be the predominant way we package our applications for the next 5-10 years
Disagree. I see their scope as too narrow, leading to a design that has far too many already evident growth problems to resist strong, broader-scoped competition.