
Sadly, given the state of things, be it in the Docker ecosystem or elsewhere, "ready for production" means something much different than it did years ago.

For me, in terms of the definition of "ready for production", Debian is a good example of the opposite end of the spectrum from Docker.



I think by 'production', they mean 'ready for general use on developer laptops'. No one in their right mind is deploying actual production software on Docker, on OS X/Windows.

I've been using it on my laptop daily for a month or two now, and it's been great. Certainly much better than the old Virtualbox setup.


>No one in their right mind is deploying actual production software on Docker, on OS X/Windows.

Since the whole point of Docker would be to deploy these in production and not just for development, I don't see how the term 'ready for production' can be used. Isn't this just a beta?


I doubt the problems mentioned happen on Linux or CoreOS, which is likely what a production environment will run on.


> Linux or CoreOS

Well, now I'm confused


Sorry, CoreOS is Linux as well, but in my mind it's enough of a hyper-specialised immutable auto-updatable container-specific version of Linux that it warrants a separate category when talking about Docker.


Docker for Windows is to isolate Windows software.

It's not a tool to test Linux containers on Windows.

The deployment target for Docker containers for Windows will be a Windows OS.


Sadly, no, they're using the name "Docker for Windows" to refer to the Docker-on-Linux-in-a-VM-on-Windows version.

Real native Windows containers, and a Docker shim to manage them, are coming [1], but they haven't been released yet.

[1] https://msdn.microsoft.com/en-us/virtualization/windowsconta...


I don't think so. That's what Jeffrey Snover is working on in Server 2016 with Windows Nano Server.

Unless something has changed since the last time I checked, the WindowsServerCore Docker image was not generally available yet and requires Server 2016 (I think it was TP6 at the time).

Docker, to my knowledge, is still exclusively for Linux flavors. (Though I'm happy to be corrected if someone knows more than me.)


The Docker images still aren't generally available, but you can now run Windows container images based on the NanoServer Docker image (or the WindowsServerCore image, if you replace nanoserver with windowsservercore in the image URL in the docs below) on Windows 10 (Insider builds) [0].

[0]: https://msdn.microsoft.com/en-us/virtualization/windowsconta...
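
IIRC the quick-start in those docs boils down to pulling and running the base image, something like this (image name per the linked docs, and it may well have changed since):

    docker run -it microsoft/nanoserver cmd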


I went wide-eyed about three or four times while reading those instructions!

Super exciting! Thanks for the comment.


I am almost positive that is completely incorrect. Can you give any example of Docker being used to isolate Windows software?


You're right. I was wrong about this.


You would use Kubernetes, DC/OS, swarm mode for AWS, etc. for that. Containers are portable... nobody is launching a Windows VM and doing a "docker run" for their production env.


The fact that I can have Bash up and running in any distro I feel like within minutes blows my friggin mind. Docker is the stuff of the future. We were considering moving our development environment to Docker for some fun, but we're still holding off until it is more stable and speedy.


I'm still using VirtualBox. Could you elaborate why Docker is better?


Leaving containers vs VMs aside, docker for Mac leverages a custom hypervisor rather than VirtualBox. My overall experience with it is that it is more performant (generally), plays better with the system clock and power management, and is otherwise less cumbersome than VirtualBox. They are just getting started, but getting rid of VirtualBox is the big winner for me.


It's based on the OS X sandbox and xhyve, which in turn is based on bhyve: https://blog.docker.com/2016/03/docker-for-mac-windows-beta/


Thanks!


When I used VirtualBox for Docker (using Docker machine/toolbox), I would run out of VM space, have to start and stop the VM, and it was just clunky all around.

Docker.app has a very nice tray menu, I don't know or care anything about the VM it's running on, and it's generally just better integrated with OS X. For instance, when I run a container, the port mapping will be on localhost rather than on some internal IP that I would always forget.
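
Concretely, something like this just works (nginx here is only a stand-in image):

    docker run -d -p 8080:80 nginx
    curl http://localhost:8080

whereas with the old docker-machine setup you first had to dig the VM's address out with "docker-machine ip default".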


I don't think he was comparing Docker to VirtualBox.

In Docker 1.11 they used VirtualBox to host a Linux Docker image to run containers. In 1.12 they switched to Microsoft's Hyper-V.


On the other hand I find my old setup with VMware much more reliable and performant. And I can continue to use the great tools to manage the VM instead of being limited to what docker provides. Some advanced network configuration is simply impossible in docker's VM.


I'm pretty sure they don't mean that, or they would have said that it was still in Beta.


This isn't a product that's "ready for production"; it's a product company declaring that it is.

This means what it's always meant: that the company believes the sum they'll make by convincing people it's "production ready" is greater than the sum they'll lose from people realizing it isn't.

Keep in mind the optimal state of affairs for Docker Inc. is one where everyone is using Docker and everyone requires an enterprise contract to have it work.


So misinformed. Docker for Mac and Docker for Windows are not targeting production. They are designed for local dev envs.


So why call it "production ready"?


I agree that it is confusing. Production ready in the sense that it is stable for the targeted use case: local development environments. Not for "production". Damn, now I'm confused...


GA would probably be a more appropriate description.


Well, it was beta before.


Exactly. "Ready for production" and "industrial" are constantly abused. All these tools are awesome and we use them, but PROPERLY deploying and supporting them in production is far from painless (or easy).


I think many view "ready for production" as a sign that what they have in place is stable enough, and that support options are available, so that it ticks all the CTO/CEO boxes in business plans.

Which basically comes down to this: when your CTO/CEO or some manager comes in preaching Docker ("we should be doing that, why aren't we?"), you have one less argument to dismiss it with now than before.

Yes, many aspects need improving, but what is there is deemed to have gained enough run-time in real environments to be considered stable enough to say: we can support this in production for off-the-shelf usage, without you needing lots of grey-bearded wizards to glue it all in place and keep it that way.


I'm not completely disagreeing with you, but Debian in recent years has taken massive steps backwards as far as production stability goes. Jessie, for example, did not ship with SELinux enabled, which was a key deliverable for Jessie to be classed as stable / ready for production. What's worse, it doesn't ship with the required SELinux policies either (again, another requirement before it was to be marked as stable). It's filled with out-of-date packages (you know they're old when they're behind RHEL/CentOS!), and they settled on probably the worst 3.x kernel they could have.


You've given one example; SELinux. Did wheezy ship with SELinux enabled? No. So how is that a step backwards? It would have been a step backwards if they shipped with it enabled and it was half-assed. SELinux is notoriously hard to get right across the board. See how many Fedora solutions start with "turn off SELinux." Shipping jessie without SELinux enabled was the right thing to do, if the alternative was: not shipping jessie; or shipping borked jessie with borked SELinux support on by default. Those who know what they are doing can turn it on with all that entails.

You gripe about kernel 3.16 LTS but provide no support for your statement. With a cursory search I can't find any. If it was such a big deal I have to assume I would. For my part I use Jessie on the desktop and server and have not encountered these mysterious kernel problems of which you complain. Again, you may have wished for some reason that they shipped with 3.18 or 4.x, but they shipped. They have 10 official ports and 20K+ packages to deal with, I'm sorry they didn't release with your pet kernel version. Again, those who know what they are doing can upgrade jessie's kernel themselves if they are wedded to the new features.

So, massive steps backwards?


Unfortunately, nobody has stepped up for SELinux maintenance. If this is important to you, you should help maintain those policies.

All your remaining points are vague at best.


Oh believe me, we did try to contribute to Debian. In recent years the community has aged poorly and become toxic and hostile, whereas the Red Hat / CentOS community has grown, is more helpful, and we have found it more accepting than ever of people offering their time.


Most people I have spoken to about this say exactly the opposite. In 2014, the project even ratified a Code of Conduct [0].

The only major contentious issue I can recall was the systemd-as-default-init discussion, but that was expected.

[0] https://www.debian.org/code_of_conduct


I genuinely don't know about what toxicity and hostility you are speaking of. Any pointer?


It's amazing to me that a tool I use to prove that our stuff is ready for production is having such a hard time achieving the same thing.


Do you run your containers in production with "docker run" ??


Only for a tiny pet project.

The sales pitch I usually give people is that any ops person can read a Dockerfile, but most devs can't figure out or help with Vagrant or Chef scripts.

But it's a hell of a lot easier to get and keep repeatable builds and integration tests working if the devs and the build system are using docker images.
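
To the readability point: a Dockerfile is just a short list of imperative steps. A made-up example:

    FROM node:6
    WORKDIR /app
    COPY . /app
    RUN npm install
    EXPOSE 3000
    CMD ["npm", "start"]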


You are doing it wrong, then. People run containers in production using orchestration platforms like ECS, Kubernetes, Mesos, etc. Docker for Mac/Windows is not designed to serve containers in production environments.

They help you build and run containers locally, but when it comes time to deploy you send the container image to those other platforms.

Using Docker like that is like running a production Rails app with "rails s".
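
The local tooling covers the build-and-push half of the loop; the orchestrator does the actual running. A sketch, with hypothetical registry and app names:

    docker build -t registry.example.com/myapp:1.0 .
    docker push registry.example.com/myapp:1.0
    kubectl run myapp --image=registry.example.com/myapp:1.0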


And how do you solve all of the security problems and over-large layer issues that the Docker team has been punting on for the last 2 years?


Which security problems are you referring to? Our containers run web applications, we aren't giving users shell access and asking them to try and break out.

Over-large layers: don't run bloated images with all your build tools in them. Run lightweight base images like Alpine with only your deployment artifact. You also shouldn't be writing to the filesystem; containers are designed to be stateless.
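
For a self-contained binary the whole image definition can be as small as this (artifact name made up):

    FROM alpine:3.4
    COPY myapp /usr/local/bin/myapp
    ENTRYPOINT ["/usr/local/bin/myapp"]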


Credentials captured in layers. Environment variable oversharing between peer containers (depending on the tool).

And the fact that nobody involved in Docker is old enough to remember that half of the exploits against CGI involved exposing environment variables, not modifying them.


With Kubernetes, putting credentials in env vars is an anti-pattern.

You create a secret, and that secret can then be mounted as a volume when the container runs; it never gets captured in a layer.
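
Roughly, with made-up names:

    kubectl create secret generic db-creds --from-file=./password.txt

and then in the pod spec you mount it instead of exporting an env var:

    volumes:
    - name: db-creds
      secret:
        secretName: db-creds
    containers:
    - name: app
      image: myapp
      volumeMounts:
      - name: db-creds
        mountPath: /etc/secrets
        readOnly: true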

Also, CGI exploits exposing env vars would work just as well on a normal non-container instance, would they not?


Two separate issues.

Yes, you can capture runtime secrets in your layers, but it's pretty obvious to everyone when you're doing that and usually people clue in pretty quickly that this isn't going to work.
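
The classic foot-gun, for anyone who hasn't hit it yet (a contrived example):

    # each instruction is its own layer, so the key is still in the
    # image history even though the final filesystem looks clean
    COPY id_rsa /root/.ssh/id_rsa
    RUN git clone git@internal:deps.git /deps && rm /root/.ssh/id_rsa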

Build time secrets are a whole other kettle of fish and a big unsolved problem that the Docker team doesn't seem to want to own. If you have a proxy or a module repository (eg, Artifactory) with authentication you're basically screwed.

If you only had to deal with production issues there are a few obvious ways to fix this, like changing the order of your image builds to do more work prior to building your image (eg, in your project's build scripts), but then you have a situation where your build-compile-deploy-test cycle is terrible.
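
e.g. let the build script, which already holds the credentials, produce the artifact outside the image, and keep the Dockerfile itself credential-free (a sketch, names made up):

    # build script, runs outside the image with access to credentials:
    ./gradlew assemble        # authenticates against Artifactory
    docker build -t myapp .

    # the Dockerfile then only copies the finished artifact:
    COPY build/libs/myapp.jar /app/myapp.jar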

That cycle problem would also be pretty easy to fix if Docker weren't so opinionated about symbolic links and volumes. So at the end of the day you have security-minded folks closing tickets to fix these problems one way, and a different set who won't provide security concessions in the name of repeatability (which might be understandable if one of their own hadn't so famously asserted the opposite: http://nathanleclaire.com/blog/2014/09/29/the-dockerfile-is-... )

I like Docker, but I now understand why the CoreOS guys split off and started building their own tools, like rkt. It's too bad their stuff is such an ergonomics disaster. Feature bingo isn't why Docker is popular. It's because it's stupid simple to start using it.


Regarding secrets in builds, I think a long term goal would be to grow the number of ways of building Docker images (beyond just Docker build), and to make image builds more composable and more flexible.

One example is the work we've experimented with in OpenShift to implement Dockerfile build outside of the Docker daemon with https://github.com/openshift/imagebuilder. That uses a single container and Docker API invocations to execute an entire Dockerfile in a container, and also implements a secret-mount function. Eventually, we'd like to support runC execution directly, or other systems like rkt or chroot.

I think many solutions like this are percolating out there, but it has taken time for people to have a direct enough need to invest.


>> Debian is a good example of the opposite end of Docker.

It is not fair to compare Docker with Debian. Docker Inc (the company behind Docker) is a for-profit corporation backed by investors. It is understandable why they need to push their products into production as soon as possible.


I use Docker a lot. I also use things like Docker volume plugins and have had to modify code due to API changes/breakages.

"Production ready" in the "container space" for me are Solaris Zones, FreeBSD Jails, and to an extent lxc (it's stable, but I've used it less). I like what Docker/Mesos/etc. bring to the table, but when working with the ecosystem, it takes work to stay on top of what is going on.

It is even harder to consult with a customer or company interested in containers and give the most accurate near/long term option. It becomes a discussion in understanding their application, what approach works now, and guidance for what they should consider down the road.

Networking and Storage are two areas with a lot of churn currently.


What does it matter how fair it is? It's not fair to compare a monkey to a fish in terms of being able to climb trees either, but that doesn't change the fact that one of the two is most likely already sitting on a branch. And ultimately, if you need something that can climb trees, a fish simply won't do, no matter how fairly you try to treat it.



