Note that I said "Docker container". That's an oversimplification, but from the article:
> Docker containers have been deliberately designed to contain just one application. One container - one Nginx; one container - one Python web server; one container - one daemon. The lifecycle of a container would be bound to the lifecycle of that application. And running an init process like systemd as a top-level entrypoint was specifically discouraged.
My guess is that the goal of this is to leave "lifecycle" issues at the container boundary. If you put nginx and your golang webapp in the same container and one of them crashes, Docker / Kubernetes can't see it; you have to have logic inside the container to detect that and restart the correct one. If the goal is to have Docker / Kubernetes take care of all that for you, one thing per container is the best option.
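To make that concrete, here's a minimal sketch of the single-process pattern: the app binary itself is PID 1, so the container's lifecycle is exactly the app's lifecycle and the runtime's restart policy does the supervision. (Image names and build paths here are illustrative, not from the article.)

```dockerfile
# Minimal single-process image: the Go binary is PID 1,
# so the container exits exactly when the app exits.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /webapp .

FROM gcr.io/distroless/static
COPY --from=build /webapp /webapp
# exec form, no shell wrapper: the app itself is PID 1
ENTRYPOINT ["/webapp"]
```

Run it with `docker run --restart=on-failure webapp` and the daemon restarts it after a crash; no supervisor inside the image needed.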
> If you put nginx and your golang webapp in the same container and one of them crashes, Docker / Kubernetes can't see it; you have to have logic inside the container to detect that and restart the correct one.
If the proxy only serves the app, you can just shrug and restart the container. If the proxy and the web app are both stateless, it shouldn't matter.
What you don’t want to do is mix stateless processes with stateful services, or have intertwined dependencies.
I'd rather use infrastructure to package orthogonal concerns than to figure out how to shoehorn all my concerns into each of many container builds. Keep the containers simpler, no?
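For instance, in Kubernetes the "infrastructure packaging" version looks roughly like this sketch: nginx and the app are separate containers in one pod, and the kubelet restarts whichever one crashes. (The names and images here are hypothetical.)

```yaml
# Hypothetical pod: proxy and app are separate containers,
# so the kubelet supervises and restarts each independently.
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  restartPolicy: Always
  containers:
    - name: proxy
      image: nginx:1.27
      ports:
        - containerPort: 80
    - name: app
      image: example.com/webapp:latest  # hypothetical image
      ports:
        - containerPort: 8080
```

Each image stays a plain single-process build; the crash-detection logic lives entirely in the orchestrator.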
This also makes containers more dev-friendly and dev-accessible.
Sure you can. The notion of "one container, one process" is the biggest misstep of containerization.
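If you do want multiple processes in one container, the usual approach is a lightweight init or supervisor as PID 1. A rough sketch with supervisord (package versions and config paths are illustrative):

```dockerfile
# Hypothetical multi-process image: supervisord runs as PID 1
# and restarts nginx or the app if either one dies.
FROM ubuntu:24.04
RUN apt-get update && apt-get install -y nginx supervisor
COPY webapp /usr/local/bin/webapp
# supervisord.conf defines [program:nginx] and [program:webapp]
COPY supervisord.conf /etc/supervisor/conf.d/webapp.conf
# -n keeps supervisord in the foreground as the container's
# main process
CMD ["/usr/bin/supervisord", "-n"]
```

The trade-off the parent comments describe still applies, though: the runtime only sees supervisord's health, so a crash-looping nginx is invisible to Docker / Kubernetes unless you add your own checks.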