I would not recommend doing it through Docker, though, especially after this change. We use AWS's ECR, and you can set it to do pull-through caching of public images, so images you've already used will stick around even if Docker Hub blows up. You don't have to pull the images yourself; you just point everything in your environment at ECR and ~~ECR will pull from Docker Hub~~ (EDIT: it only supports quay.io, not Docker Hub) and start building its cache as you use the images.
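For reference, setting up the pull-through rule is basically a one-liner with the AWS CLI. A rough sketch, where the prefix, region, account ID, and image path are all placeholders:

```sh
# Create a pull-through cache rule that fronts quay.io under the "quay/" prefix
aws ecr create-pull-through-cache-rule \
  --ecr-repository-prefix quay \
  --upstream-registry-url quay.io \
  --region us-east-1

# Then pull through ECR instead of hitting the upstream directly
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/quay/prometheus/prometheus:latest
```

ECR creates the cached repository on the first pull, so there isn't much else to pre-provision.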
Knowing ECR has pull through caching is really helpful. I'm sure we would have come across that in the course of investigating our response, but this definitely saved us some time!
Edit: Damn, looks like ECR's pull-through caching only works for ECR Public and Quay? It's a little unclear, but maybe not a drop-in solution for replacing Docker Hub.
As someone who maintains the registries we use globally at work, +1.
I know people groan at running infrastructure, but the registry software is really well documented and flexible.
If you don't need to push, only pull, configuring them as pull-through caches is nice for availability and reliability -- while also saving you from being nickel-and-dimed.
They will get things from a configurable upstream, proxy.remoteurl.
Contrary to what the documentation says, this can work with anything speaking the registry API, not just Docker Hub.
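For anyone who hasn't set one up: the pull-through part really is just a couple of lines in the registry's config.yml. A minimal sketch, with Docker Hub as the upstream (swap remoteurl for whatever you're mirroring):

```yaml
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  # despite the docs, anything speaking the registry API can go here
  remoteurl: https://registry-1.docker.io
```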
edit: My one criticism: it's not good from an HTTPS hardening perspective. It's functional, but audits find non-issues.
You'll want nginx or something in front to ensure good HSTS header coverage for non-actionable requests, for example.
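If it helps anyone, the fronting piece can be as small as something like this (hostname and cert paths are made up); the `always` flag is what gets the HSTS header onto error responses, which is what those findings usually complain about:

```nginx
server {
    listen 443 ssl;
    server_name registry.example.internal;             # placeholder
    ssl_certificate     /etc/nginx/tls/registry.crt;   # placeholder
    ssl_certificate_key /etc/nginx/tls/registry.key;   # placeholder

    # send HSTS on every response, including 4xx/5xx
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $http_host;
    }
}
```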
All good points, but while this saves you from the Docker images disappearing, it does nothing to solve the issue of those images no longer receiving important security updates and bug fixes going forward.
That's good to hear. So I'll just have to spend an hour or so tomorrow night ensuring our private pull-through registry is used on everything in prod (see the daemon.json sketch below), and the biggest explosion is averted. Images built by the company land in internal registries already, so that's fine as well.
That means it's mostly a question of (a) checking for image squatting on the Hub after orgs get deleted, which I don't know how to deal with just yet (could I just null-route Docker Hub on my registry until it's been evaluated, so we just don't get new images?), and (b) rifling through all of our container systems to see where people use which image, to figure out which are verified, or paying, or obsoleted, and where they went, or what is going on. That'll be great fun.
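For the "make sure prod uses the mirror" part, assuming the hosts run plain dockerd, it's usually just the registry-mirrors key in /etc/docker/daemon.json plus a daemon restart (hostname is a placeholder):

```json
{
  "registry-mirrors": ["https://registry.example.internal"]
}
```

One caveat: registry-mirrors only applies to Docker Hub image names; anything referencing another registry by hostname won't go through it.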
The typical Docker registry software, when configured as a pull-through cache, doesn't allow pushes, if memory serves. That may be an important consideration while handling the situation.
We run them in 'maintenance mode' just to be absolutely sure nothing the upstream doesn't have (or didn't have at one point) gets permitted in!
Though, I don't think they'll allow pushes anyway with 'proxy.remoteurl' defined.
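If it's the same registry software, the read-only switch lives under storage.maintenance in the config. A minimal sketch:

```yaml
storage:
  filesystem:
    rootdirectory: /var/lib/registry
  maintenance:
    readonly:
      enabled: true
```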
I'm not sure I followed your setup properly, but with the private registry defined as your 'proxy.remoteurl', you shouldn't have to worry about the Hub in particular - unless it's looking there, or people are pushing bad things into it
> I'm not sure I followed your setup properly, but with the private registry defined as your 'proxy.remoteurl', you shouldn't have to worry about the Hub in particular - unless it's looking there, or people are pushing bad things into it
That is exactly the thing I am worried about, as we have a pull-through mirror for the docker hub.
What happens if some goofus container from that chaotic team pulls in knownOSS/component, but knownOSS got deleted and - after 30 days of available recon by _all_ malicious teams on the planet - got squatted instantly afterwards with rather vile malware? Spend some pennies to make a dollar by getting into a lot of systems.
Obviously, you can throw a million shoulds at me, shouldn't do that, should rename + vendor and such (though how would you validate the image you mirror?), but that's a messy thing to deal with, and I am wondering about a centralized way to block it without needing anyone but the registry/mirror admins.
The problem here is that the company I work at has started building these golden images, full of cruft, and then no team gets allocated to maintain them.
> Secondly, if this is a serious worry, I would recommend creating your own private Docker registry.
I've personally been using Sonatype Nexus for a few years with no issues - both for caching external images, as well as hosting my own custom ones. It has pretty good permissions management and cleanup policies.
It's probably not for everyone, but only having to pay for the VPS (or host things on my homelab) feels both simpler and more cost-effective in my case. I've also used it at work and there were very few issues with it across the years, mostly due to underestimating how much storage would be needed (e.g. going with 40 GB of storage for approx. 10 apps, each of which was in active development).
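For anyone wanting to try it, standing Nexus up is not much more than the official image and a persistent volume. A rough compose sketch; the 8082 connector port is just an example, since you pick it yourself when creating a Docker repository in the UI:

```yaml
services:
  nexus:
    image: sonatype/nexus3
    ports:
      - "8081:8081"   # Nexus UI / API
      - "8082:8082"   # example Docker repository connector port
    volumes:
      - nexus-data:/nexus-data
    restart: unless-stopped

volumes:
  nexus-data:
```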
Secondly, if this is a serious worry, I would recommend creating your own private Docker registry.
https://docs.docker.com/registry/deploying/
Then I would download all current versions of the images you use within your org and push them up to said registry.
It's not a perfect solution, but you'll be able to pull the images if they disappear, and considering this will take only a few minutes to set up somewhere, it could be a life saver.
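The mirroring itself is nothing fancy; roughly the below, repeated for every image you care about (image names and tags are just examples, and you'd want TLS and auth before exposing it beyond localhost):

```sh
# Stand up a bare registry
docker run -d --name registry \
  -p 5000:5000 \
  -v /srv/registry:/var/lib/registry \
  registry:2

# Copy an image you depend on into it
docker pull alpine:3.17
docker tag alpine:3.17 localhost:5000/mirror/alpine:3.17
docker push localhost:5000/mirror/alpine:3.17
```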
As well, I should note that most cloud providers also have a container registry service you can use instead of this. We use the Google one to back up vital images in case Docker Hub were to have issues.
Is this a massive pain in the butt? Yup! But it sure beats failed deploys! Good luck out there!