In GitLab CI, we solve this by using the pipeline number in the tag. Pipeline numbers are always in order, and the number is available as a variable anywhere in the pipeline.
I do this as well — "Upgrade from image b1257 to b1262". As a bonus, you can tell that any tag that doesn't fit the b-then-build-number naming convention hasn't come from CI.
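A minimal sketch of what that can look like inside a build job's `script:` section — the "b" prefix and the use of GitLab's predefined CI_PIPELINE_IID / CI_REGISTRY_* variables are assumptions based on the comments above:

```sh
# Tag the image with GitLab's per-project pipeline counter, so tags like
# b1257, b1262 always sort in build order.
echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
docker build -t "$CI_REGISTRY_IMAGE:b$CI_PIPELINE_IID" .
docker push "$CI_REGISTRY_IMAGE:b$CI_PIPELINE_IID"

# Any later job in the same pipeline can reconstruct the exact same tag
# without passing anything around:
docker pull "$CI_REGISTRY_IMAGE:b$CI_PIPELINE_IID"
```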
I have just used calver (https://calver.org/) to keep it simple, with a ddmmyyyy-hhmmss timestamp of the build time as a marker in the version. I have had to deal with builds that are basically a recipe: take multiple ingredients (libs, apps, each with its own lifecycle) and put them together in one image to get stuff running.
The image version in my case is a tracker of the last-known-good set (of the various ingredients), as validated by the quick pre-build tests.
I haven't had any major problems so far; maybe I am just lucky not to have hit the usual calver hassles yet.
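As a sketch (the exact field order and the `myimage` name are illustrative; this just follows the ddmmyyyy-hhmmss layout described above):

```sh
# calver-style tag taken from the build time, in UTC so different build
# agents can't disagree about the date.
TAG="$(date -u +%d%m%Y-%H%M%S)"
docker build -t "myimage:${TAG}" .
```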
> It’s enabled us to have more confidence that our dev environments match production
How does Docker help you know your production code is like your development code? _Maybe_ for the code itself, but I highly doubt it, because you probably have different build targets for dev and production, different ports, networking, environment variables, asset serving, debugging, etc. And certainly the docker-compose, maybe Swarm, setup you run locally looks nothing like production? And of course your databases and other services are set up completely different locally.
Because the sandboxing that various container stacks provide creates a much clearer line of demarcation between what is the responsibility of your ops team and what belongs to your developers. It makes it much harder to accidentally depend on the state of the world (rough sketch after the list):
* There are two networking stacks. Developers only see and care about the "inner" stack while the ops team is free to do anything to the "outer" or real network. Devs never see that the ports are actually different in prod.
* All the "environment" has to be passed explicitly. You can't accidentally depend on variables, files, programs, or libraries that just happen to be on the host system. Simultaneously, the ops team is now free to configure, manage, or change the host system in any way they see fit.
* Devs only care that they're deploying to a swarm, not about any of the details of how that swarm exists in prod.
* Devs see shared storage just appear in their containers. To them it doesn't matter in the slightest that it comes from NFS, Gluster or Ceph.
* Devs just see traffic hit their app. They don't know or care anything about our LB or caching setup.
* Logs and metrics are sent to magic addresses and names on the inner network that map to the real VMs outside.
* Same with the DB, Redis, Memcache, Queues, etc.
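A rough sketch of the inner-network idea using plain `docker` commands (the network name, image names, env file, and ports are all made up; in practice this would be Compose or Swarm config, but the effect is the same):

```sh
# Ops side: create the "inner" network and attach the real backing services
# (or proxies to them) under stable aliases. The app never learns what is
# actually behind "db" or "redis".
docker network create app_net
docker run -d --network app_net --network-alias db    postgres:16
docker run -d --network app_net --network-alias redis redis:7

# Dev side: everything the app needs is passed explicitly via app.env;
# nothing leaks in from the host, and the published port can differ per
# environment without the app ever knowing.
docker run -d --network app_net --env-file app.env -p 8080:80 myapp:b1262
```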
> * devs see shared storage just appear in their containers. To them it doesn't matter in the slightest that it comes from NFS, Gluster or Ceph.
> * devs just see traffic hit their app. They don't know or care anything about our LB or caching setup.
> [...]
> * Same with the DB, Redis, Memcache, Queues, etc...
I am a bit scared by these devs only caring about themselves.
So if by "dev environment" you mean locally on a developer's laptop, then sure. But shared dev/staging environments absolutely look like prod. CI builds an image and deploys it to staging (which runs the same docker-compose.yml or whatever as prod, just with URIs replaced to point at the stage database, etc.). If we like it, we hit the button to deploy that same image to prod. It's not perfectly identical, but stage and prod are pretty nearly the same.
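Heavily simplified sketch (registry, tag, and file names are illustrative): the image is built once, and only the interpolation values differ between the stage and prod env files.

```sh
# Build once, tag with the pipeline number, push.
docker build -t "registry.example.com/myapp:b1262" .
docker push "registry.example.com/myapp:b1262"

# Stage and prod use the same docker-compose.yml; stage.env and prod.env
# differ only in values like DATABASE_URI (and which hosts they run on).
docker compose --env-file stage.env -f docker-compose.yml up -d
# ...after sign-off on staging, the identical image goes out to prod:
docker compose --env-file prod.env -f docker-compose.yml up -d
```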
This seems like another group standing up against semver. Nothing in that post is specific to Docker.
Also, Git provides a `git describe` command that builds a short identifier from the last tag, the number of commits since that tag, and the abbreviated commit hash. It's standard enough and used by a lot of people (e.g. Linux distros). It has the benefit of being instantly parsable by Git. What's wrong with that?
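For reference, the output looks like `<last tag>-<commits since tag>-g<abbreviated hash>` (the tag and hash below are made up), and it drops straight into an image tag:

```sh
git describe --tags --long
# e.g. v1.4.0-23-g2f9c1ab   (illustrative output)

# Valid as a Docker tag, and Git can resolve it back to the exact commit:
docker build -t "myapp:$(git describe --tags --long)" .
git rev-parse "$(git describe --tags --long)"
```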
I like being able to use the image in later pipeline steps without having to pass a variable around. That would be tough to do with a time-based tag, but it's easy with a date-based one, since we only really run those pipelines during work hours.
This will result in using the same tag for all builds that day, which largely defeats the purpose of a unique version if you are planning on testing or deploying a specific version.
Right - that assumes the build does not pull in potentially different dependency versions even when the code itself doesn't change. This may or may not be the case.
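One compromise, sketched under the same GitLab assumptions as earlier in the thread (CI_PIPELINE_IID, illustrative registry path): keep the readable date prefix but append something every job in the pipeline already agrees on, so the tag is unique per build and still needs no variable passing.

```sh
# Calendar prefix for readability + per-project pipeline counter for
# uniqueness; any job in the same pipeline can rebuild this tag (given the
# work-hours assumption above, the date part won't change mid-pipeline).
TAG="$(date -u +%Y%m%d).${CI_PIPELINE_IID}"
docker build -t "registry.example.com/myapp:${TAG}" .
```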