Hacker News
Tagging Docker images for fun and profit (happyvalley.dev)
38 points by stepbeek on April 27, 2020 | hide | past | favorite | 25 comments


In GitLab CI, we solve this by using the pipeline number in the tag. They're always in order, and it is available as a variable anywhere in the pipeline.
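A minimal sketch of that scheme in a job's script section (the registry path and image name are made up; only `$CI_PIPELINE_ID` comes from GitLab, and it's defaulted here so the snippet runs outside a pipeline):

```shell
# Tag the image with the GitLab pipeline ID, which only ever increases.
CI_PIPELINE_ID="${CI_PIPELINE_ID:-1234}"
IMAGE="registry.example.com/myapp"          # hypothetical registry path
TAG="$IMAGE:b$CI_PIPELINE_ID"
# In a real job this would be: docker build -t "$TAG" . && docker push "$TAG"
echo "$TAG"
```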


We use `git rev-list --count HEAD` on master, which gives you the number of commits reachable from HEAD, so it only ever goes up (as long as history isn't rewritten).
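For the curious, a self-contained demo of that command (run in a throwaway repo so it works anywhere):

```shell
# Build number = number of commits reachable from HEAD.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "first"
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "second"
BUILD=$(git rev-list --count HEAD)
echo "$BUILD"          # 2 commits -> build number 2
```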


we do this in addition to the sha. best of both worlds!


That is a great idea!


I do this as well — "Upgrade from image b1257 to b1262". As a bonus, you can tell every tag that doesn't fit the b-then-a-build-number naming convention hasn't come from CI.


Oh, I need to start doing that, that's really clever.


I do the same thing with Azure DevOps, as pipelines there also have an incrementing integer ID


Neat! We're using gitlab too and I never thought about this.


It's $CI_PIPELINE_ID if you want to try it out.


I have just used calver (https://calver.org/) to keep it simple, with a date[ddmmyyyy] plus hour, minute and second stamp of the build time as a marker in the version. I have had to deal with the build basically being a recipe: taking multiple ingredients (libs, apps, each with their own lifecycle) and putting them together in one image to get stuff running.
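A sketch of that kind of timestamp tag (shown here year-first so the tags also sort lexically; the image name is made up):

```shell
# Calver-style tag from the build timestamp, second precision.
STAMP=$(date -u +%Y%m%d%H%M%S)      # e.g. 20200427173005
TAG="myapp:$STAMP"
echo "$TAG"
```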

The image version in my case is a tracker of the last-known-good-set (of various ingredients) - as validated by the quick, pre-build tests.

I haven't had any major problems so far; maybe I am just lucky not to have run into the calver hassles yet.

I stumbled upon one similar approach too, recently. https://worklifenotes.com/2020/02/27/automatic-version-incre...



Why not just use labels? How does labeling by date help improve order in the registry?


In all honesty I've never used labels before - thanks!
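For reference, a hedged sketch of attaching build metadata as labels instead of packing it into the tag (the label keys follow the OCI image annotation convention; the values here are invented):

```shell
# Attach provenance as image labels rather than encoding it in the tag.
CREATED=$(date -u +%Y-%m-%dT%H:%M:%SZ)
REVISION="abc1234"   # stand-in for: git rev-parse --short HEAD
echo docker build \
  --label "org.opencontainers.image.created=$CREATED" \
  --label "org.opencontainers.image.revision=$REVISION" \
  -t myapp:latest .
# Read back later with:
#   docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.revision" }}' myapp:latest
```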


> It’s enabled us to have more confidence that our dev environments match production

How does Docker help you know your production code is like your development code? _Maybe_ for the code itself, but I highly doubt it, because you probably have different build targets for dev and production, different ports, networking, environment variables, asset serving, debugging, etc. And certainly the docker-compose, maybe Swarm, setup you run locally looks nothing like production? And of course your databases and other services are set up completely different locally.


Because the sandboxing that various container stacks provide creates a much clearer line of demarcation between what is the responsibility of your ops team and what belongs to your developers. It makes it much harder to accidentally depend on the state of the world.

* There are two networking stacks. Developers only see and care about the "inner" stack while the ops team is free to do anything to the "outer" or real network. Devs never see that the ports are actually different in prod.

* All the "environment" has to be passed explicitly. You can't accidentally depend on variables, files, programs, or libraries that just happen to be on the host system. At the same time, the ops team is now free to configure, manage, or change the host system in any way they see fit.

* Devs only care that they're deploying to a swarm, not about any of the details of how that swarm exists in prod.

* Devs see shared storage just appear in their containers. To them it doesn't matter in the slightest that it comes from NFS, Gluster or Ceph.

* Devs just see traffic hit their app. They don't know or care anything about our LB or caching setup.

* Logs and metrics are sent to magic addresses and names on the inner network that map to the real vms outside.

* Same with the DB, Redis, Memcache, Queues, etc..
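A tiny illustration of the first two points (every name and port below is invented): ops remaps the real port and injects the environment explicitly at run time, and the app never knows.

```shell
# Inside the container the app always listens on 3000; ops maps the real
# port and supplies the environment explicitly at run time.
INNER_PORT=3000
OUTER_PORT=8080                      # ops-chosen, invisible to the app
echo docker run \
  -p "$OUTER_PORT:$INNER_PORT" \
  -e DATABASE_URL="postgres://db.internal:5432/app" \
  myapp:b1262
```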


> * Devs see shared storage just appear in their containers. To them it doesn't matter in the slightest that it comes from NFS, Gluster or Ceph.

> * Devs just see traffic hit their app. They don't know or care anything about our LB or caching setup.

> [...]

> * Same with the DB, Redis, Memcache, Queues, etc..

I am a bit scared by these devs only caring about themselves


So if by "dev environment" you mean locally on a developer's laptop, then sure. But shared dev/staging environments absolutely look like prod. CI builds an image, deploys it to staging (which is running the same docker-compose.yml or whatever as prod, just with URIs replaced to point at stage database, etc). If we like it, we hit the button to deploy that same image to prod. It's not perfectly identical, but stage and prod are pretty nearly the same.


I don't see why Docker does anything special here; with Terraform + EC2/AMIs/AWS, for example, we get exactly the same environment parity.


This seems like another group standing up against semver. Nothing in that post is specific to Docker.

Also, Git provides a `git describe` command that builds a short identifier from the most recent tag, the number of commits since that tag, and an abbreviated commit hash. It's standard enough and used by a lot of people (e.g. Linux distros), and it has the benefit of being instantly parsable by Git. What's wrong with that?
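A self-contained demo of the `git describe` output format (throwaway repo; `--tags` is needed here because the tag is lightweight rather than annotated):

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "first"
git tag v1.0
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "second"
DESC=$(git describe --tags)
echo "$DESC"    # v1.0-1-g<sha>: last tag, commits since it, commit hash
```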


> While this doesn’t give us great granularity in terms of order of changes within a single day

Use hours, minutes and seconds?


I like being able to use the image in later pipeline steps without having to pass a variable around. This would be tough to do with time, but it's easy to do with date since we only really run those pipelines during work hours.


This will result in using the same tag for all builds that day, which defeats the purpose of a unique version to a large degree if you are planning on testing or deploying a specific version.


The tag also contains the commit id, not just the date.


Right - that would assume that the build does not pull in potentially different dependency [versions] even if the code itself doesn't change. This may or may not be the case.


Don't you need a partial SHA anyway? That already needs a variable.

And if you have a git checkout around, you can use the git commit timestamp: precise to the second, and it needs nothing except the source tree.
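For example (demonstrated in a throwaway repo; the `--date=format:` pretty option is standard git):

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "first"
# Committer timestamp of HEAD, second precision, needs only the checkout.
STAMP=$(git log -1 --date=format:%Y%m%d%H%M%S --format=%cd)
echo "$STAMP"
```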



