
Docker should have been a neat tool made by one enthusiast, just like curl is.

Instead it has a multi-million dollar company behind it, and VC's who demand profits from a thing that shouldn't have ever had a business plan.



> Docker should have been a neat tool made by one enthusiast, just like curl is.

I have nothing but mad respect for Daniel Stenberg. 25 years of developing great software, for which he has been threatened[1] and has faced ridiculous issues obtaining a US travel visa[2].

[1] https://daniel.haxx.se/blog/2021/02/19/i-will-slaughter-you/

[1] https://news.ycombinator.com/item?id=26192025

[2] https://daniel.haxx.se/blog/2020/11/09/a-us-visa-in-937-days...


There are lots of high-functioning but harmless crazy people out there. I used to work in a government job, and I found one of the most common tells was exactly what this "slaughter" person did. They love to list dozens of agencies to you for no reason. They have no authority, so they hope they can borrow it from your fear of a random place. I cannot tell you how many emails and voicemails I've had left for me that fit this pattern; dozens at least.

>I have talked to now: FBI FBI Regional, VA, VA OIG, FCC, SEC, NSA, DOH, GSA, DOI, CIA, CFPB, HUD, MS, Convercent

Bonus tell: they also love to say they are a doctor or have a PhD in something, often PhDs in multiple subjects.


I remember someone abusing a ticketing system I had to work with for reporting technical issues with a vast computer network. They raised a ticket with an attachment from some absolute nutcase, in multicolored, heavily underlined .RTF format, much like what you described, with "hate mail" in the subject line. The ticket was closed as "not hate mail". Still makes me chuckle every time I think about it.


> [1] https://daniel.haxx.se/blog/2021/02/19/i-will-slaughter-you/

Wow, that's clearly someone with serious mental issues :( I hope he can find some help for his condition.


If you have your name all over the place, I guess it's bound to happen eventually. Curl is used by millions of people, which makes Daniel Stenberg kind of a celebrity. With so many users, there are bound to be some crazies like the "I will slaughter you" guy.

It must be a common occurrence among famous software people; I wonder how they deal with it. Do they actively hide their real identity, for example by using a proxy for licensing? Do they just ignore such madness? Is it a burden or, on the contrary, do they enjoy their fame?


Maybe it's a good thing that the guy affected hasn't been awarded the defense contract as a result.


People suffering from psychosis can create "facts" supporting their ideas and believe in them. Usually it's stuff like "someone follows me" or "someone wants to hurt me". Psychosis is the entry point to schizophrenia, which is, more or less, an illness in which the brain makes stuff up and the ill person cannot differentiate facts from hallucinations.

Possibly there was no defense contract at all.


It's not just people suffering from psychosis who do that.

"29% believe aliens exist and 21% believe a UFO crashed at Roswell in 1947. [...] 5% of respondents believe that Paul McCartney died and was secretly replaced in the Beatles in 1966, and just 4% believe shape-shifting reptilian people control our world by taking on human form and gaining power. 7% of voters think the moon landing was fake." -- https://www.publicpolicypolling.com/wp-content/uploads/2017/...

"Belief in both ghosts and U.F.O's has increased slightly since October 2007, by two and five percentage points, respectively. Men are more likely than women to believe in U.F.Os (43% men, 35% women), while women are more likely to believe in ghosts (41% women, 32% men) and spells or witchcraft (26% women, 15% men)." -- https://www.ipsos.com/en-us/news-polls/belief-in-ghosts-2021

"A new Associated Press-GfK poll shows that 77 percent of adults believe [angels] are real. [...] belief in angels is fairly widespread even among the less religious. A majority of non-Christians think angels exist, as do more than 4 in 10 of those who never attend religious services." -- https://www.cbsnews.com/news/poll-nearly-8-in-10-americans-b...


The other day someone mentioned that these surveys consistently have about a 5% troll rate.

The 77% belief in angels is bizarre though. Like, I believe in the possibility of aliens; the universe is quite large (although I think all spacecraft sightings are almost certainly just mundane stuff, from spy planes to weather balloons, etc.). I even believe in the possibility of ghosts being real, or more likely some strange phenomenon we can't explain that we misidentify as ghosts. But angels?

One man's angel is another man's ghost or alien though I guess.


if you buy into god, angels are on par with aliens, possibly even more probable.


Indeed. According to [1], it would appear 58% of the US officially believes in angels by creed (Protestant, Catholic, Mormon, Orthodox, Jewish, or Muslim). Only 11% are atheist or agnostic, and there's a 30% group that's religious but "unaffiliated" or "other". I totally buy that two thirds of "religious but unaffiliated or other" would believe in angels, which lands you right around the poll's number (58% plus two thirds of 30% is roughly 78%, against the 77% reported).

The difference here is that to the religious mind, angels are credible in a way that UFOs, ghosts, and magic are not. (The irreligious mind probably finds them all equally credible, hence the disconnect.)

Put another way, it would not surprise me that someone who was "religious but not affiliated" might have a high regard for the Bible. Angels figure prominently in the Bible, and hence fall in that bucket.

[1] https://en.wikipedia.org/wiki/Irreligion_in_the_United_State...


I would be very interested to see a citation on the troll rate.


Lizardman’s Constant by Scott Alexander:

https://slatestarcodex.com/2013/04/12/noisy-poll-results-and...


That's not a citation. That's a guy making things up.


I'm surprised aliens is the low one here. The exact question is "Do you believe aliens exist, or not?", not something more specific like little green men in flying saucers abducting cows.

The universe is large. In the tiny slice we can observe well enough to draw conclusions, Wikipedia currently lists 62 "potentially habitable exoplanets". I'd be much more surprised by intelligent life being unique to Earth than by there being many planets harboring intelligent life, or to answer the question as asked: I believe aliens exist.

https://en.wikipedia.org/wiki/List_of_potentially_habitable_...


"Belief" or stated belief to an anon survey?


If we go that way, a lot more believe god exists!


That sounds very much like how ChatGPT acts.


Is it? GPT just hallucinates the next words in a given text.


Of course it is. What in the parent's post is different from that? The parent post's first sentence is, "People suffering from psychosis can create 'facts' supporting their ideas and believe in them."


I don't think GPT believes something.



Yea schizophrenia is no joke. Even the follow up apology makes it clear he hasn’t recovered.


The Terry A. Davis reference was bemusing.


In the PDF there's a mention of Terry Davis, so I'm tempted to think this is actually a bit of a troll.


That PDF links to https://web.archive.org/web/20210223111850/https://www.nerve.... Would be quite the troll to go to the effort of buying a domain just to mess with an open source author.


You're probably right - I assumed the name Terry Davis being embedded in an email following a schizophrenic rant about software was a ruse.


I think it was genuine admiration, at least that is how I took it.


Replying to child I can't reply to?

There was a period when the US was treating public-key encryption like an arms export, and people involved in spreading the technology outside the US were on us.gov's sh*t list.


https://en.wikipedia.org/wiki/Phil_Zimmermann

After a report from RSA Security, who were in a licensing dispute with regard to the use of the RSA algorithm in PGP, the United States Customs Service started a criminal investigation of Zimmermann, for allegedly violating the Arms Export Control Act.[5] The United States Government had long regarded cryptographic software as a munition, and thus subject to arms trafficking export controls. At that time, PGP was considered to be impermissible ("high-strength") for export from the United States. The maximum strength allowed for legal export has since been raised and now allows PGP to be exported. The investigation lasted three years, but was finally dropped without filing charges after MIT Press published the source code of PGP

They tried to ruin the man.


Because he was competing with a private military contractor, and the US government is a wholly owned subsidiary of the MIC, or at least often acts like it is. Customs should have told RSA "no", "this is a private contract dispute", "hire a lawyer and file suit". Of course it was much more than that. Zimmermann put real privacy-protecting encryption in the hands of the public, and the Many Eyes (which included state allies and adversaries) couldn't have that.

But they needn't have worried: decades on, the public is still ignorant about encryption, except as a marketing term, and most have no idea what a key pair is or what to do with it. Fraud around unauthorized access to government and commercial accounts is rampant (you _have_ set up and secured your online identity on your government's social security and revenue collection sites, haven't you?). That could have been prevented by early adoption and distribution of key pairs, alongside a serious public education campaign.

Problem is, that would be at cross purposes with the goal of keeping the public uneducated. Better for them to while away their time watching cable TV or delving into the latest conspiracy theory (pro or con).


I consider him equally important to people like Tim Berners-Lee for building the foundation of the web.


I read the travel issues post you linked, but am not seeing the causal link you’re drawing between development of software and visa issues. Was there more to the story?


I may have remembered incorrectly which post it was. Here[1], in the paragraph titled "Why they deny me?" (unlinkable), Daniel hints at the possibility that this may have been due to the development of (lib)curl, which is used for malware creation by third parties. There was no proof though.

[1] https://daniel.haxx.se/blog/2018/07/28/administrative-purgat...


The most superficial (and likely) reason to me seems to be that he uses haxx.se. I really wonder what kind of investigation they do. If they just start with Google, this one might come up immediately.


Ah, that makes sense. I have no dog in the fight and am far from the emotion of having a visa delayed in this circumstance. I would say that it was much more likely to be some level of incompetence than malice, having dealt with large government bureaucracies myself.


> ridiculous US travel visa obtaining issues

Ridiculous? This is a pretty common issue for anyone who travels to the US. A visa may be denied for whatever reason, and tough luck on appeal. I am an EU citizen and had a similar experience just for visiting Iran on a tourist trip. Don't even ask about guys from India, Pakistan or less fortunate countries.

And it got even worse with the pandemic. The US required vaccination for a very long time, long after it was relevant. Maybe they still do; frankly, I don't care to look at this point!

I think the biggest WTF here is why an international organization like Mozilla is organizing a company-wide meetup in the US, and not in a country with a liberal visa entry policy, such as Mexico!


I applied for a US travel visa as a citizen of Poland in 2012 and was denied travel due to the "wrong type of visa". I was planning to visit my employer and spend 1-2 weeks traveling across the country. Apparently both business and travel visas were inappropriate for these purposes. To add, I was questioned in a US consulate/embassy (can't remember which) in Warsaw by a person who repeatedly refused to speak English and insisted on Polish, and I, as a native Polish speaker, had issues understanding them. Poor experience.

This was not the case for Swedish citizens, as mentioned at the beginning of Daniel's linked post. Sweden is part of the ESTA visa waiver program[1], and Daniel had traveled to the US multiple times before being denied travel (with a still-valid ESTA) and only then applied for a visa.

[1] https://esta.cbp.dhs.gov/


I believe that B1/B2 should work just fine for these purposes.

Probably you answered an officer (or airline worker) that you were gonna "work" there, not just visit your employer for an event?


Absolutely not. I had, and still have, my own small business in Poland and I was clear (in writing) that I am planning to visit my main client.


You mentioned both employer and client, are they the same?


Yeah. I consider a one-man small business serving mainly one big client to be comparable. On paper it's B2B; in reality it's working for the client, and if the client is the small business' main source of income, it's pretty much employment.

The differences, in Poland at least, are that the small business owner in this scenario is not protected by employment laws (3-month notice for layoffs, damages liability capped at 3 months' salary, etc.) and uses the company's (EU) VAT registration number instead of the personal social security number equivalent (PESEL number). It eases contract agreements and invoicing abroad and allows serving more clients easily. Company existence can also be validated quickly on the EU VIES[1] website.

In the visa case, I have of course used the "paper" phrasing, as in reality I was, and am, only employed by my own small business.

[1] https://ec.europa.eu/taxation_customs/vies/#/vat-validation


Ridiculous does not mean uncommon. Situations can be both common and ridiculous (absurd).

I'm American, but I have enough friends and family from other countries (my wife is an Iranian passport holder) to know what you're talking about and how difficult it can be.


While it’s true that US visa applications can be difficult, the same is true for any first-world country.

I was born in a third-world country, and ended up getting tourist visas to the EU, US, and Canada. US was by far the easiest - for me anyway.

If you want a large global meeting in a safe country (I would never in my life go to Mexico) there will be visa issues.


It seems like another symptom of ZIRP/cheap money.

Lots of ideas that could have been a neat feature or tool somehow ended up raising $500M of funding with no viable plan to ever monetize.

The fact that the product is successful but after a decade they barely make $50M/year of revenue against $500M of lifetime funding is crazy. As a user, you can work at a company with a billion in revenue and barely owe them a few thousand a year. Or you might just use Podman for free, and prefer it due to some of the design differences.

At the very least, a lot of these firms, with VC pressure, overstayed their welcome as private enterprises and should have sold themselves to a larger firm.


Some time ago I learned that Postman Labs, which produces a nice but not-rocket-science HTTP client, raised $433M at a multi-billion valuation and has 500 employees. Isn't it astonishing?


Postman's strength is not in the HTTP client part. It is in the SaaS part, and I think their valuation (even though overblown) mostly reflects their corporate penetration and the willingness of many companies to pay a small amount for their services.


The SaaS part being the offering for creating developer.acme.com type pages?


No.

Centralizing and sharing your API descriptions, test suites and plans, the various ad-hoc queries people usually keep in their notes or on Slack (and lose), handling involved auth stuff which is a hassle with curl, etc.

I think they gravitate towards the same area as swagger.io or stoplight.io, but from the direction of using the existing APIs.


API schemas and test suites are usually stored as code in some sort of SCM. I googled "postman maven" and "postman gradle" and found nothing official so I guess they have nothing except stand-alone workspaces.

An API registry is a useful tool given the modern love for nanoservices, where a team of five somehow manages ten of them, but I don't see anything similar done by Postman. Two of the service registries I know of were implemented in-house for obvious reasons.


Do you also mean things like Uber, with double-digit billions lost and no road to profitability? I agree.


And Lyft... and Doordash... and GrubHub...

Pretty much the entire "gig economy" is full of hot air and survives on regular influxes of VC money despite massive losses every year. The business model doesn't frickin work.

The hope from investors was that they would be investing in what would ultimately become a monopoly that could extract rents to repay them (not very competitive-market of them, but that's tipping their hand a little, isn't it...), but the funny bit is there are like 5-7 competitors in the US alone doing the same thing.

Here's a take: maybe this is just a natural monopoly situation, and if we like the convenience of gig delivery but don't like the high prices per order or that gig workers don't get sufficient pay, health insurance or other benefits, how about we just nationalize it?

You know, the same way we did for everything that wasn't food or groceries before? USPS Courier service sounds like an idea to me.


Nationalize it? No way. Besides I like rich investors ponying up money so my ride/food is more convenient and cheaper! It won’t last, but then what does?


I think that Docker could have a viable business plan, but they had terrible execution. At my previous position, I wanted to use Docker Hub more heavily, but the entire experience was like a bootstrap project someone did as a university assignment. Many advanced features for organizations (SSO/SAML) that we would have happily paid for were lacking.


That, plus not being willing to accept Purchase Orders, doomed them with my employer.

It's as if they had no idea how things work at large enterprises that are older than most Docker employees.


Indeed. Docker should've been plumbing. They could've had a really nice business with developer tools on top of the core bits, but they decided to try to jump straight to enterprise and did a number of things to alienate partners and their broader community.

Instead of adding value to Docker they're just trying to find the right knobs to twist to force people into paying. And I think people should pay for value when they're using Docker substantially for business. But it seems like a very short-sighted play for cash, disregarding their long-term relationship with users and customers.

All that said: They have to find revenue to continue development of all the things people do like. I'd encourage people to ask if the things they've gotten for free do in fact have value, and if that's the case, maybe disregard the ham-fistedness and pony up if possible.


Yeah! I should be able to get 50x value from software and not pay for it /s

The open source community carried Docker on its back and is now bending over. Let this be a lesson to you. If you're building open source, maybe stick to open source solutions in your tech stack, and if something isn't there, build it. This is what Apache does for the Java ecosystem.

I don't have sympathy, the writing was on the wall and this isn't the first time it's happened to the community.


> If you're building open source, maybe stick to open source solutions in your tech stack and if it's not there build it. This is what Apache does for the Java ecosystem.

You mean this Apache: https://github.com/apache ?


In all fairness, curl is purely a software tool. Docker is arguably more like a service. As such, it creates costs for and direct dependency on the entity behind it.


Docker is a software tool. Docker Hub is a service. If Docker didn't stand up Docker Hub the equivalent services from GitHub, Google et al would have competed on a more even playing field.


It's almost like they created intentional ambiguity here when they renamed the company (dotCloud) to match the name of the open source tool, then renamed the open source project behind the tool to something else (Moby) but kept the name for the command line tool, while also working the name Docker into their product offerings, including Engine and Desktop, which handle completely different parts of managing containers. That's not even including registries, Dockerfiles, Compose, Swarm, etc. and the ambiguity around where those sit in a Venn diagram.

That's some Google-level naming strategy there.


Lots of orgs figure out how to piggy back the “service” part of whatever they’re doing on free or sponsored infrastructure, though. Homebrew, for example, has been doing a lot of the same stuff on Travis and GH Actions since forever.


I think they potentially could have made a decent business out of it but they made a lot of bad business decisions.

I find myself shaking my head at a lot of their technical decisions too.

Podman seems to me to be a case study for how to do this right.


Podman is interesting. I like the architecture problems it solves with respect to Docker, but the way they went about it was typical big-business Red Hat. Dan Walsh, Podman's BDFL it seems, basically stood in front of RHEL/OpenShift customers for years bashing Docker, even when a majority of the things he was claiming were less than half baked. RHEL made sly moves like not supporting the Docker runtime, even at a time when it put their customers in an awkward spot, before containerd won the k8s runtime war. Podman is backed by much larger corporate machinery. If anyone thinks that Podman "winning" is a good thing, then you've played right into Walsh's antics. RHEL wants nothing more than to have no friction when selling all the "open source" tooling you may need in your enterprise.

Podman wasn't built out of necessity but out of fiscal competitive maneuvering. And it's working. I see so many articles on the "risks" of Docker vs Podman. The root wars are all over the place. Yet... the topic is blown way out of proportion by RHEL for a reason: FUD, all in the name of sales. Is there merit to the claim? For sure. Docker's architecture was originally built up as client/server for a different purpose. That didn't play out, and the architecture ended up being a side effect of it. But we don't see container escapes nearly as much as Red Hat would like us to believe. I keep paying Docker because I don't want to live in Red Hat's world, with their tooling that they can just lock out of other platforms once they feel like it. No thanks.


Podman winning is good. Red Hat consistently does things right, for example their quay.io is open source, unlike Docker Hub and GitHub Container Registry. The risks of not using rootless containers weren’t blown way out of proportion, because rootless containers really are much more secure. Not requiring a daemon, supporting cgroup v2 early, supporting rootless containers well and having them as the default, these are all good engineering decisions with big benefits for the users. In this and many other things, Red Hat eventually wins because they are more open-source friendly and because they hire better developers who make better engineering decisions.


> In this and many other things, Red Hat eventually wins because they are more open-source friendly and because they hire better developers who make better engineering decisions.

We must be talking about a different Red Hat here. Podman, with breaking changes in every version, supposedly feature- and CLI-complete with Docker but not actually, is winning because it's more open source friendly or better technically? Or systemd, written in a memory-unsafe language (yes, that is a problem for something so critical, and it was already exploited at least a couple of times), using a weird special format for its configuration, where the lead dev insults people and refuses to backport patches (no, updating systemd isn't a good idea), won "because it was more open source friendly"? Or OpenShift, which tries to supplant Kubernetes stuff with Red Hat specific stuff that doesn't work in some cases (e.g. TCP IngressRoutes lack many features), is winning "because it was more open source friendly"?

No, Red Hat are just good at marketing, are an established name, and know how to push their products/projects well, even if they're not good or even ready (Podman is barely ready but has been pushed for years by this point).


>Or systemd, written in a memory unsafe language (yes, that is a problem for something so critical and was already exploited at least a couple of times)

What memory safe language 1) existed in 2010 and 2) is thoroughly portable to every architecture people commonly run Linux on and 3) is suitable for software as low-level as the init?

Rust is an option now but it wasn't back then. And Rust is being evaluated now, even though it's not quite ready yet on #2.


Go, although with its GC it's debatable to what extent it's suitable for very low-level software.

And honestly, the language choice was only the tip of the iceberg; it took years of people adapting before systemd became usable. And it still doesn't handle circular dependencies any better than arbitrarily, which is ridiculous; literally one of its main jobs is to handle dependencies.


There's Ada.


Ada has no ecosystem, and a lot of the ecosystem that does exist is proprietary, and it brings us back to point #2.


> Ada has no ecosystem, and a lot of the ecosystem that does exist is proprietary,

Not no ecosystem, but yes it's way smaller... probably even smaller than Rust, yes.

> and it brings us back to point #2.

I seriously doubt it. Ada is supported directly in gcc; why would it have any worse platform coverage than anything else?


it would be fun if we could simulate the world where systemd was written in Ada and then read all the comments/criticism


I first found Podman when looking for alternatives when Docker broke on my laptop in the midst of all the Docker Desktop licensing changes. Frankly, I use it because it has been more stable lately, not because of any long run marketing campaign from Red Hat. I suspect a lot of its userbase will be in a similar place as the experience with Docker continues to degrade.


OTOH, Docker didn't want to support a lot of features that enterprise customers wanted, like self-hosted private registries, because they wanted people using Dockerhub.

And weren't the runtime problems because Docker was very, very late to adopting cgroups v2?


Yes, cgroups v2 was a big problem for Docker on EL8 for a long time.


Yes, exactly. GP is misinformed on the history. Red Hat didn't sabotage anything. Docker took forever to update to cgroups v2, and that broke it for distros like Fedora that are up to date. The user had to downgrade their kernel in order to use Docker, but if they did, everything else worked fine.


While you have a valid point with cgroups I never stated anything about "sabotage". So let's not play the misinformed card and then go on making things up.

As for Red Hat and their games of not supporting Docker, even after cgroups were addressed Red Hat never officially supported Docker as a runtime. How do I know this? Because at the time I was working with paying clients of RHEL/OpenShift and was on calls regarding said customers being forced to use inferior (their words) RHEL tooling. So while your history may have not seen the games Red Hat was playing, they surely were.


You may have a healthy dislike for the corporate behemoth that is RH / IBM, but, to my mind, Docker, Inc is worse: they keep more things closed, and they literally pressure for money.

I mean, I wish guys like FSF would have produced a viable Docker alternative, but this hasn't happened, at least yet.


>I don't want to live in Red Hat's world, with their tooling that they can just lock out of other platforms once they feel like it

Explain please. This sounds like you're accusing RH of sabotaging Docker, or planning to. That's a very serious accusation requiring proof.


I'm not sure why it's so hard for anyone to find this on their own. OpenShift forced users to use CRI-O and RHEL removed Docker as part of the Yum repository.

Plenty of references to this: https://crunchtools.com/docker-support/

Even though, at the time, CRI-O was a much worse option. Yes, Red Hat plays competitive lockout games all day long. This is just a singular example.


Some of it also sounds a bit like leftover angst from Red Hat winning the systemd war too.

Turns out hanging out in someone else’s cathedral can have some pretty big benefits.


Red Hat has not won any systemd war. Of all the distributions out there using systemd, Red Hat is the one that uses the fewest systemd features. They are even going so far as to disable features.

See:

* https://bugzilla.redhat.com/show_bug.cgi?id=1962257

* https://gitlab.com/redhat/centos-stream/rpms/systemd/-/blob/...

Sometimes they even backport systemd features from more recent versions, disable them but leave man pages in the original state. Even the /usr split isn't progressing at all.

Meanwhile Fedora has implemented all these changes; according to https://www.redhat.com/en/topics/linux/what-is-centos-stream it should be the upstream for CentOS.

I would say Red Hat dropped the ball on systemd and has no intention of supporting any of the new features in any of their systems.


I too find Red Hat's poor documentation hygiene a pain in the arse. But as for the disabled systemd features, I think they all fall into the category of experimental/unproven features that overlap with other existing RHEL components. Every enabled feature has a cost in the form of support burden.


Those are not "systemd features", they are components within the systemd suite. Using systemd-init has never required that you use every component within the systemd suite (e.g. ntp, network management, etc.)


>Podman is backed by much larger corporate machinery. If anyone thinks that Podman "winning" is a good thing then you've played right in to Walsh's antics.

I'm not making a moral judgement. I'm just saying that docker had serious technical problems and docker the business sucked at monetizing it.

Docker played into Red Hat's tactics. I'd never heard of Dan Walsh, and frankly, I'd wanted rootless containers for years before I ever heard of Podman.

>Podman wasn't built out of necessity but out of fiscal competitive maneuvering.

Because Red Hat is a business, not a charity.

I doubt they would have built a better Docker if Docker hadn't been refusing to improve.


I am usually an early adopter but I keep coming back to Docker since Podman is still very rough around the edges, especially in terms of "non-direct" usage (aka other tools)


As someone who's been bitten by this, I'm not sure if it's an issue with podman itself as much as the tools which expect docker. It could be argued that podman is not a docker drop-in replacement, but I expect more and more tools to pick it up.


> It could be argued that podman is not a docker drop-in replacement

This is an unfortunate part IMHO. podman is not a docker drop-in replacement, but it is advertised as such.


Besides the advertising, it's very close to being a drop-in replacement, but their pace isn't closing that gap quickly enough (or maybe they don't want to, or it isn't possible, idk, I'm just a user), so you get a false sense of confidence doing the simple stuff before you run into a parity problem.


Worth remembering is that Docker supports Windows containers. That’s a hard requirement for many enterprises.


Is this a matter of developers constantly relearning the lesson of the folly of only supporting the current top thing, or is it a lot harder to support more than one?


I don't know how "hard" it is, but in my particular case I wanted to use this from IntelliJ. It actually works, but the issue is that the docker emulation socket isn't where the IDE expects it, and I haven't found a way to tell it where to look.

Once I symlinked the socket, everything worked.
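For reference, roughly what that workaround looks like, as a sketch only: the exact paths are assumptions (the rootless podman socket location is per-user, /var/run/docker.sock is just the usual default a Docker client or IDE looks for, creating a link there needs root), and it presumes the podman API socket is already running.

    # sketch: expose the rootless podman socket at the path Docker clients expect
    import os

    uid = os.getuid()
    podman_sock = f"/run/user/{uid}/podman/podman.sock"   # rootless podman API socket (assumed path)
    docker_sock = "/var/run/docker.sock"                   # where many tools look by default

    if not os.path.exists(docker_sock):
        os.symlink(podman_sock, docker_sock)               # requires root for /var/run
        print(f"linked {docker_sock} -> {podman_sock}")

The sibling comment's approach of pointing the IDE's Docker connection straight at the podman socket avoids needing root at all.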


This worked for me:

Connect to Docker Daemon with -> TCP Socket -> Engine API URL -> unix:///run/user/$UID/podman/podman.sock


The devil is in the details. For example, docker has a DNS service at 127.0.0.11 it injects into /etc/resolv.conf, while podman injects domain names of your network into /etc/hosts. Nginx requires a real DNS service for its `resolver` configuration.


Docker was created by dotCloud for a different purpose than it ended up as. I think they are owed credit for what has become an incredibly elegant solution to many problems, and how great the user experience has always been.

Compare it to other corporate-managed tools like Terraform and Ansible. Both of them have horrible UX and really bad design decisions. Both make me hate doing my job, yet you can't not use them because they're so popular your company will standardize on them anyway. Docker, on the other hand, is a relative joy to use. It remains simple, intuitive, effective, and full of features, yet never seems to suffer from bloat. It just works well, on all platforms. There were a few years of pain on different platforms, but now it's rock solid.

And to be fair to them, their Moby project is pretty solidly open-source, and if Docker Inc dies, the project will continue.


> Instead it has a multi-million dollar company behind it, and VC's who demand profits from a thing that shouldn't have ever had a business plan.

But you don't have to host curl. Who's gonna put up the money to host all the images and bandwidth that tens of thousands of companies use but never pay for?


It could have been designed with a self-host option or a torrent/ipfs backend for near-zero hosting costs and still be 'just works' for the user.


pretty sure you haven't used IPFS before.

For users to download resources from IPFS you either need to install the client (which is quite resource intensive) or use the gateways (which are just servers and cost money to run).

Also, the speed and reliability are nowhere near good enough for serious work.


Even dumber, it should have just been pointers to encrypted files hosted on any arbitrary web server.


THIS!

Alternatives:

- Virtual registry that builds and caches image chains on-demand, locally

- Maybe a free protocol like Bit Torrent to store and transfer the images


Yeah, but curl is used to access and download all sorts of data, which are all hosted by multi-million dollar companies. Just like git downloads and uploads data to git repositories. curl and git are valuable, but so is GitHub, and websites in general. The problem is that they haven't found a way to monetize docker hub.


The VCs offered free bandwidth and storage to gain market share.

Bandwidth and storage are ultimately not free; they have to be paid for.


FWIW, Docker was not intended originally to be a tool for commercialization; it grew out of dotCloud, which open sourced the tool as a last-ditch-effort of sorts, if memory serves.


Yes, it was obvious even when it was launched, because they packaged and configured existing solutions. It was like having a company behind 'ls' (ironic).


Coming through: https://github.com/ihucos/plash (90% done and useful)


what do you mean "demand profit"?

Last time I checked, rent is not free, food is not free, bus tickets are not free. No reason why software should be free.

Open source was invented by big cos as a "marginalize your complement" strategy, not the ideal it is marketed as. As evidence: I do not see any cloud vendor open sourcing their code.


> Open source was invented by big cos as a "marginalize your complement" strategy, not the ideal it is marketed as.

> In 1983, Richard Stallman launched the GNU Project to write a complete operating system free from constraints on use of its source code. Particular incidents that motivated this include a case where an annoying printer couldn't be fixed because the source code was withheld from users.

from https://en.m.wikipedia.org/wiki/History_of_free_and_open-sou...

> Last time I check rent is not free, food is not free, bus ticket is not free. No reason why software should be free.

You are welcome to sell your software. You are welcome to be replaced if you can't compete. You don't have to sell your software and we don't have to buy it. You can and will be competed with.

Trying to build a multimillion-dollar venture off a UI - even a good UI - is probably unwise. It does not seem to be going well for Docker, which has gone from no competitors to multiple, and all of those competitors are open source.


From your very link, 1983's GNU Project was not the first piece of Open Source software.

From your link: The first example of free and open-source software is believed to be the A-2 system, developed at the UNIVAC division of Remington Rand in 1953


> Software was not considered copyrightable before the 1974 US Commission on New Technological Uses of Copyrighted Works (CONTU) decided that "computer programs, to the extent that they embody an author's original creation, are proper subject matter of copyright"

FOSS before 1974 looks... funny. It existed! But it did not look like the modern FOSS movement.

Even post 1974 and pre-GNU, FOSS-ish text editors and such existed. This was still the era when licenses were often non-standard and frequently did not exist. Handing your friend a copy of a program was the norm, regardless the actual legal situation (which itself was probably vague and unspecified).



