    ideally a coop of some variety
This is the role I feel like podman, the tool developed by Red Hat, is filling.


Podman is great and is a first-class citizen on Fedora. It also integrates nicely with SystemD. My only gripe with it is that not many developers provide podman configuration on their install pages like they do with docker compose.


Tangent: Why is the misspelling "SystemD" so common, when it has always been "systemd"? I would understand "Systemd" or "SYSTEMD" or something, but why specifically this weird spelling?


People not familiar with tacking on a lowercased ‘d’ to the name for daemons?


Probably to specifically call it out as "systemd" versus an autocorrected misspelling of "systems".


Instinctively applying Pascal case, maybe?


I've always thought of it as in analogy to System V.


Nah, it's French.

> System D is a manner of responding to challenges that require one to have the ability to think quickly, to adapt, and to improvise when getting a job done.

> The term is a direct translation of French Système D. The letter D refers to any one of the French nouns débrouille, débrouillardise or démerde (French slang). The verbs se débrouiller and se démerder mean to make do, to manage, especially in an adverse situation. Basically, it refers to one's ability and need to be resourceful.

Source: https://en.wikipedia.org/wiki/System_D


Interestingly, https://www.freedesktop.org/wiki/Software/systemd/#spelling says...

> But then again, if [calling it systemd] appears too simple to you, call it (but never spell it!) System Five Hundred since D is the roman numeral for 500 (this also clarifies the relation to System V, right?).


I'm using docker-compose with a podman VM for development on a mac. Works ok so far. It wasn't quite slick enough when Docker pulled the licence switch last year, but the experience in the last couple of months has been pretty painless.


Fortunately you can use docker-compose with Podman these days.

(There have been a few false starts, so I'm specifically referring to the vanilla, unmodified docker-compose that makes Docker API calls to a UNIX socket that Podman can listen on.)
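
For anyone who wants to try it, something like this works on a typical systemd distro (a sketch; the socket path can vary):

    # enable Podman's Docker-compatible API socket (rootless)
    systemctl --user enable --now podman.socket

    # point vanilla docker-compose at it
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
    docker-compose up -d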


This is more about Docker hub than Docker.

Image hosting is expensive at scale, and someone's got to pay for the compute/storage/network...


Docker Hub's the part I care about the most.

If I can't use it as a daemon-focused package manager that works more or less the same everywhere, with minimal friction, without having to learn or recall the particulars of whatever distro (hell, on my home server it even saves me from having to fuck with systemd), and with isolation so I can run a bunch of versions of anything, I'll probably just stop using it.

Everything else about it is secondary to its role as the de facto universal package manager for open source server software, from my perspective.

... of course, this is exactly the kind of thing they don't want, because it costs money without making any—but I do wonder if this'll bite them in the ass, long-term, from loss of mindshare. Maybe building in some kind of transparent bandwidth-sharing scheme (bittorrent/DHT or whatever) would have been a better move. I'd enable it on my server at home, at least, provided I could easily set some limits to keep it from going too nuts.


>> Image hosting is expensive at scale, and someone's got to pay for the compute/storage/network...

BitTorrent would beg to differ.


That's a neat idea but probably unworkable in practice. Container images need to be reliably available quickly; there is no appetite for the uncertainties surrounding the average torrent download.


> That's a neat idea but probably unworkable in practice. Container images need to be reliably available quickly; there is no appetite for the uncertainties surrounding the average torrent download.

BitTorrent seems to work quite well for Linux ISOs, which are about the same size as container images, for obvious reasons.

IMO, the big difference is that, with BitTorrent, it's possible to very inexpensively add lots of semi-reliable bandwidth.


Nobody is going to accept worrying about whether the torrent has enough people seeding in the middle of a CI run. And your usual torrent download is an explicit action with an explicit client; how are people going to seed these images, and why would they? And what about the long tail?


Nobody needs to be seeding if only one download is active. You could self-host an image at home on a Raspberry Pi and provide an image in a minute.

Nobody's CI should be depending on an external download of that size.
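
For what it's worth, the self-hosting part really is close to a one-liner with the reference registry image (a sketch; the path and example image are made up):

    # run the reference OCI registry, persisting layers to local disk
    docker run -d --name registry -p 5000:5000 \
        -v /srv/registry:/var/lib/registry registry:2

    # tag and push an image to it (localhost registries are treated
    # as insecure by default, so no TLS setup is needed for a test)
    docker tag alpine localhost:5000/alpine
    docker push localhost:5000/alpine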


We are talking about replacing the docker hub and the like, what people "should" be doing and what happens in the real world are substantially different. If this hypothetical replacement can't serve basic existing use cases it is dead at the starting line.


> enough people seeding

The .torrent file format, and clients, include explicit support for HTTP mirrors serving the same files that are distributed via P2P.


Archive.org does this with theirs. If there are no seeds (super common with their torrents; IDK, maybe a few popular files of theirs do have lots of seeds and that saves them a lot of bandwidth, but sometimes I wonder why they bother), then it'll basically do the same thing as downloading from their website. I've seen it called a "web seed". It's the only place I've seen use it, but evidently the functionality is there.
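
For the curious, adding a web seed is a creation-time flag in some tools; e.g. with mktorrent it should look something like this (a sketch; the tracker and mirror URLs are made up):

    # create a torrent whose clients can fall back to an HTTP mirror
    # (BEP 19 "web seed") when few or no peers are seeding
    mktorrent -a udp://tracker.example.org:1337/announce \
        -w https://mirror.example.com/ubuntu-24.04.iso \
        ubuntu-24.04.iso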


I'm pretty much convinced the people at Docker have explicitly made their "registry" not be just downloadable static files purely to enable the rent-seeking behavior we are seeing here...


Cache images locally. Docker has enough provisions for image mirrors and caches.

Downloading tens or hundreds of megabytes of exactly the same image, on every CI run, at someone else's expense, is predictably unsustainable.
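
A pull-through cache is only a few lines of setup with the reference registry image (a sketch; the mirror host is wherever you run it):

    # run the reference registry as a pull-through cache of Docker Hub
    docker run -d --name hub-mirror -p 5000:5000 \
        -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
        registry:2

    # then point each daemon at it in /etc/docker/daemon.json:
    #   { "registry-mirrors": ["http://localhost:5000"] }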


People who need "reliably available quickly" can pay or set up their own mirror. Everyone else can use the torrent system.


Not a bad idea. Have the users seed the cached images.


I agree. The core devs should create a new company and focus just on the tools, with a simple, scaling licence model for them.

As far as DockerHub goes, the OSS hosting costs do need to be solved, but surely they can be.


I'm not sure it's easy. We're seeing other open source projects like Kubernetes struggle with hosting costs, and that's just one project.

Ideally it'd be great to see the industry fund it, but with budget cuts in tech, I'm not sure that'll happen...


I haven't seen that, but I haven't been following along. I'd assumed they would be very Google-funded still. Is it a general CNCF problem?


So I'm not in the details of this, but I understand from the k8s Slack that there is a fixed GCP budget for image hosting and Kubernetes is getting through it too quickly, which is why they're moving the registry domain from a GCP-specific one to a generic one, to allow for other funding to be found and used.


While that's true, for the amount of network traffic they're likely moving around, I wonder where they're placing their servers.

E.g. something like AWS with massive data-transfer costs, vs. something else like carefully placed dedicated/colocation servers at providers which don't charge for bandwidth.


If it's AWS, they've surely got a huge discount. No way they're paying 8+x normal big-fish CDN rates for transfer. At their scale, it would have easily been worth the effort to move to something cheaper than AWS long ago, or else to negotiate a far lower rate.


It is on AWS:

    keeb@hancock > [/home/keeb] dig +short hub.docker.com
    elb-default.us-east-1.aws.dckr.io.
    prodextdefblue-1cc5ls33lft-b42d79a68e9f190c.elb.us-east-1.amazonaws.com.


> No way they're paying 8+x normal big-fish CDN rates for transfer.

While you're probably right, I've seen dumber things happen so I wouldn't completely rule out the possibility. :wink:


Image hosting is not that expensive at scale. I can put an image on ECR and pay for bandwidth and storage at rates that are really not very good, and it still comes out way cheaper than what Docker Hub wants me to pay.


How is a tool developed and strongly pushed by a corporation (to the point of strongarming customers into transitioning to it, missing features be damned), especially one owned by IBM, filling the role of a coop-developed tool?


It's not as easy or as simple as docker + docker compose.


It's literally OCI-compatible, integrates with systemd and LSMs, and runs rootless by default. Podman is 100000% better designed on the inside, with the same interface on the outside.


Rootless networking is still a mess, with no source-IP propagation and much slower performance, so for most users docker with userns-remap is actually a better choice.

Also, systemd integration isn't a plus for me; I don't want to deal with SystemD just to have a container start on startup.
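
For reference, userns-remap is a one-line daemon option (a sketch; "default" makes the daemon create and use a dockremap user):

    # /etc/docker/daemon.json -- run containers in a remapped user namespace
    {
        "userns-remap": "default"
    }

    # then restart the daemon
    sudo systemctl restart docker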


I think --network=pasta helps with source-IP preservation.

Regardless, that has never bothered me, since I'm only using podman or docker for local development...


Hmmm, pasta seems to solve all rootless networking issues...

https://github.com/containers/podman/pull/16141
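
On a recent Podman (4.4+, I believe, where pasta support landed) it's just another network mode; a sketch:

    # rootless container using pasta; forwarded connections should keep
    # the client's real source address instead of the gateway's
    podman run --rm --network=pasta -p 8080:80 docker.io/library/nginx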


It's the lack of fully compatible compose support that matters most.


Podman appears to support the compose v2 spec and the socket API, but still doesn't fully support BuildKit.

https://www.redhat.com/sysadmin/podman-compose-docker-compos...


You're right, it's both easier and simpler since no daemons are involved. podman-compose has the same command-line interface and has worked ok for me so far (maybe 3 or 4 years at this point).


Podman-compose isn't fully compatible with the new compose spec.

Also, I really don't care whether docker has a daemon or not; for me it offers features like auto-starting containers without bothering with SystemD, and auto-updates using Watchtower and the docker socket.

And since podman doesn't have an official distro package repo like docker does, you are stuck using whatever old version your distro ships, without recent improvements, which matters for a very actively developed project.
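
To illustrate the difference (a sketch; the container and unit names are made up):

    # docker: the daemon itself restarts the container
    docker run -d --restart=always --name web nginx

    # podman: generate a systemd unit and hand the job to systemd
    # (podman generate systemd predates the newer Quadlet approach)
    podman create --name web nginx
    podman generate systemd --new --name web \
        > ~/.config/systemd/user/web.service
    podman rm web
    systemctl --user daemon-reload
    systemctl --user enable --now web.service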


> Also, I really don't care whether docker has a daemon or not; for me it offers features like auto-starting containers without bothering with SystemD

Bingo, the "pain" of the daemon (it's never caused a single problem for me? Especially on Linux; on macOS I've occasionally had to go start it because it wasn't running, but BFD) saves me from having to touch systemd. Or, indeed, from caring WTF distro I'm running and which init system it uses at all.


To be fair, every mainstream distro now uses systemd.


> And since podman doesn't have an official repo like docker,

Hmm... https://github.com/containers/podman

I found that on: https://podman.io/ so, I'm pretty sure it's official.


I meant a repo for a distro package manager, so you can get the latest version regardless of whatever version your distro ships.


Most major distros ship podman in their repositories. Just use your package manager to install podman.


And those versions are often out of date, which matters given that podman is in active development and you want to be using the latest version.


I don't understand what the issue is. Don't use an LTS distro if you want up-to-date software. Fedora and Arch are up to date for Podman. Alpine seems to be one minor version behind.


I want stability for the system and a newer podman version. I do this all the time with docker: install an LTS distro and then add the official docker repos.
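
E.g. on an Ubuntu LTS that's just the documented repo setup (a sketch, per docs.docker.com):

    # add Docker's official apt repository and install from it
    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
        | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    echo "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
    sudo apt-get update && sudo apt-get install -y docker-ce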


podman + podman-compose is just as easy.


Not comparable to the full compose spec.



