Podman is great and is a first-class citizen on Fedora. It also integrates nicely with SystemD.
My only gripe with it is that not many developers provide Podman configuration on their install pages the way they do with Docker Compose.
Tangent: Why is the misspelling "SystemD" so common, when it has always been "systemd"? I would understand "Systemd" or "SYSTEMD" or something, but why specifically this weird spelling?
> System D is a manner of responding to challenges that require one to have the ability to think quickly, to adapt, and to improvise when getting a job done.
> The term is a direct translation of French Système D. The letter D refers to any one of the French nouns débrouille, débrouillardise or démerde (French slang). The verbs se débrouiller and se démerder mean to make do, to manage, especially in an adverse situation. Basically, it refers to one's ability and need to be resourceful.
> But then again, if [calling it systemd] appears too simple to you, call it (but never spell it!) System Five Hundred since D is the roman numeral for 500 (this also clarifies the relation to System V, right?).
I'm using docker-compose with a podman VM for development on a mac. Works ok so far. It wasn't quite slick enough when Docker pulled the licence switch last year, but the experience in the last couple of months has been pretty painless.
Fortunately you can use docker-compose with Podman these days.
(There have been a few false starts so I'm specifically referring to the vanilla unmodified docker-compose that makes Docker API calls to a UNIX socket which Podman can listen to).
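A minimal sketch of that setup on a systemd-based Linux distro (the socket path can vary by distro and by rootless vs. rootful mode):

```shell
# Start Podman's Docker-compatible API service as a rootless user socket.
systemctl --user enable --now podman.socket

# Point docker-compose (and the docker CLI) at Podman's socket.
# $XDG_RUNTIME_DIR is typically /run/user/<uid> on systemd distros.
export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/podman/podman.sock"

# Vanilla docker-compose now talks to Podman via the Docker API.
docker-compose up -d
```

On a Mac the podman VM exposes the socket on the host instead; `podman machine start` prints the `DOCKER_HOST` value to use.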
If I can't use it as a daemon-focused package manager that works more-or-less the same everywhere with minimal friction without having to learn or recall the particulars of whatever distro (hell, on my home server it even saves me from having to fuck with systemd) and with isolation so I can run a bunch of versions of anything, I'll probably just stop using it.
Everything else about it is secondary to its role as the de facto universal package manager for open source server software, from my perspective.
... of course, this is exactly the kind of thing they don't want, because it costs money without making any—but I do wonder if this'll bite them in the ass, long-term, from loss of mindshare. Maybe building in some kind of transparent bandwidth-sharing scheme (bittorrent/DHT or whatever) would have been a better move. I'd enable it on my server at home, at least, provided I could easily set some limits to keep it from going too nuts.
That's a neat idea but probably unworkable in practice. Container images need to be reliably available quickly; there is no appetite for the uncertainties surrounding the average torrent download.
> That's a neat idea but probably unworkable in practice. Container images need to be reliably available quickly; there is no appetite for the uncertainties surrounding the average torrent download.
Bittorrent seems to work quite well for Linux ISOs, which are about the same size as container images, for obvious reasons.
IMO, the big difference is that, with bittorrent, it's possible to very inexpensively add lots of semi-reliable bandwidth.
Nobody is going to accept worrying about whether the torrent has enough people seeding in the middle of a CI run. And your usual torrent download is an explicit action with an explicit client, how are people going to seed these images and why would they? And what about the long tail?
We are talking about replacing the docker hub and the like, what people "should" be doing and what happens in the real world are substantially different. If this hypothetical replacement can't serve basic existing use cases it is dead at the starting line.
Archive.org does this with theirs. If there are no seeds (super common with their torrents—IDK, maybe a few popular files of theirs do have lots of seeds and that saves them a lot of bandwidth, but sometimes I wonder why they bother) then it'll basically do the same thing as downloading from their website. I've seen it called a "web seed". Only place I've seen use it, but evidently the functionality is there.
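For reference, "web seed" is the HTTP/FTP seeding extension (BEP 19): the torrent metadata carries a plain HTTP URL that clients fall back to when no peers are seeding. A sketch of publishing a file that way with mktorrent (the tracker and mirror URLs are placeholders):

```shell
# Create a torrent whose metadata includes an HTTP "web seed" URL.
# With zero seeders, clients just download over HTTP from the mirror,
# which is effectively what Archive.org's setup degrades to.
mktorrent \
  -a udp://tracker.example.org:1337/announce \
  -w https://mirror.example.org/files/image.tar \
  -o image.tar.torrent \
  image.tar
```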
I'm pretty much convinced the people at Docker have explicitly made their "registry" not be just downloadable static files purely to enable the rent-seeking behavior we are seeing here...
So I'm not in the details of this, but I understand from the k8s Slack that there is a fixed GCP budget for image hosting and Kubernetes is getting through it too quickly, which is why they're moving the registry domain from a GCP-specific one to a generic one, to allow for other funding to be found and used.
While that's true, for the amount of network traffic they're likely moving around, I wonder where they're placing their servers.
E.g. something like AWS, with its massive data-transfer costs, vs. something like carefully placed dedicated/colocation servers at hosts that don't charge for bandwidth.
If it's AWS, they've surely got a huge discount. No way they're paying 8+x normal big-fish CDN rates for transfer. At their scale, it would have easily been worth the effort to move to something cheaper than AWS long ago, or else to negotiate a far lower rate.
Image hosting is not that expensive at scale. I can put an image on ECR and pay for bandwidth and storage at what are really not very good rates, and it still comes out way cheaper than paying what Docker Hub wants me to pay.
How is a tool developed and strongly pushed (to the point of strongarming customers into transitioning to it, missing features be damned) by a corporation, especially one owned by IBM, filling the role of a co-op-developed tool?
It’s literally OCI compatible, integrates with systemd and LSM, and runs rootless by default. Podman is 100000% better designed on the inside with the same interface on the outside.
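The systemd integration mentioned here can be sketched roughly like this, using `podman generate systemd` (deprecated in favour of Quadlet in newer Podman releases, but still widely used); the container name and image are just examples:

```shell
# Run a rootless container as an unprivileged user.
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine

# Generate a user-level systemd unit so the container survives reboots.
mkdir -p ~/.config/systemd/user
podman generate systemd --new --name web \
  > ~/.config/systemd/user/container-web.service

systemctl --user daemon-reload
systemctl --user enable --now container-web.service

# Allow the user's units to run without an active login session.
loginctl enable-linger "$USER"
```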
Rootless networking is still a mess with no IP source propagation and much slower performance.
So for most users docker with userNS-remapping is actually a better choice.
Also, systemd integration isn't a plus for me; I don't want to deal with systemd just to have a container start on startup.
You're right, it's both easier and simpler since no daemons are involved. podman-compose has the same command-line interface and has worked ok for me so far (maybe 3 or 4 years at this point).
Podman-compose isn't fully compatible with the new compose spec.
Also, I really don't care whether Docker has a daemon or not; for me it offers features like auto-starting containers without bothering with systemd, and auto-updates using Watchtower and the Docker socket.
And since Podman doesn't have an official distro package repo like Docker does, you are stuck using whatever old version your distro ships, without recent improvements, which matters for a very actively developed project.
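The daemon-side features mentioned above boil down to two commands; the container names and ports here are illustrative:

```shell
# The Docker daemon itself restarts the container on crash or reboot;
# no init-system unit files are involved.
docker run -d --name web --restart unless-stopped -p 8080:80 nginx:alpine

# Watchtower watches for new image tags via the Docker socket and
# recreates running containers when their images are updated.
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```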
> Also, I really don't care whether Docker has a daemon or not; for me it offers features like auto-starting containers without bothering with systemd
Bingo, the "pain" of the daemon (it's never caused a single problem for me? Especially on Linux; on macOS I've occasionally had to go start it because it wasn't running, but BFD) saves me from having to touch systemd. Or, indeed, from caring WTF distro I'm running and which init system it uses at all.
I don't understand what the issue is. Don't use an LTS distro if you want up to date software. Fedora and Arch are up to date for Podman. Alpine seems to be one minor version behind.
I want stability for the system and a newer podman version.
I do this all the time with docker, install an LTS distro and then add the official docker repos.
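Roughly, per Docker's own install docs for Ubuntu (adjust the distro name for Debian etc.):

```shell
# Add Docker's official GPG key and apt repository on an Ubuntu LTS.
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
  | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

echo "deb [arch=$(dpkg --print-architecture) \
  signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
  | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install the current Docker engine from the upstream repo,
# rather than the (often older) distro package.
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
```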