I wouldn’t be surprised if Google aims to replace Google Deployment Manager, which never really got any traction, with this. Azure has ARM, AWS has CloudFormation. Google betting big on Terraform will be interesting considering the recent change to the BSL.

If Google decides to back OpenTF that would be a huge deal.


Getting a lot of Docker Hub vibes from this one. HashiCorp is of course within their rights. Can't be cheap to run the registry given the obscene size of some terraform providers.

  $ ls -lah terraform/providers/registry.terraform.io/hashicorp/aws/5.14.0/darwin_amd64/
  total 368M

Anyone have an idea why terraform needs a 370 MiB binary just to call REST APIs?


> Anyone have an idea why terraform needs a 370 MiB binary just to call REST APIs?

That's because Terraform fell for the Go trap. When space and bandwidth are cheap, why not go for an environment that only ships fully self-contained binaries? Oh, and why not go for a language that attracts hipsters like fruit attracts flies, but is a nightmare to develop in?

Bloody ridiculous, it's a miracle Terraform got as far as it did.

(Yes, I'm working with Terraform every day and it's pretty decent, but I'd love to extend it for Atlassian Cloud stuff without having to add a sixth language to my already sizeable toolbelt. Why Atlassian doesn't offer Terraform integration on their own is beyond me in any case.)


> Can't be cheap to run the registry given the obscene size of some terraform providers.

Some providers are also hosted externally. I guess if traffic becomes a problem they might just switch to serving every provider that is built on GitHub straight from GitHub releases (and hope that GitHub won't change its policy):

  curl -s https://registry.terraform.io/v1/providers/carlpett/sops/0.7.2/download/linux/amd64 | jq .download_url
  "https://github.com/carlpett/terraform-provider-sops/releases/download/v0.7.2/terraform-provider-sops_0.7.2_linux_amd64.zip"

  curl -s https://registry.terraform.io/v1/providers/hashicorp/helm/2.10.1/download/linux/amd64 | jq .download_url
  "https://releases.hashicorp.com/terraform-provider-helm/2.10.1/terraform-provider-helm_2.10.1_linux_amd64.zip"


https://github.com/hashicorp/terraform-provider-aws/issues/3...

The size is what you get when you compile every single AWS Go service client into one binary.

Each service client is like 1-2 MB. But when you have 200 services...
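
A sketch of my own (not the provider's actual code) of why that adds up: merely importing the generated aws-sdk-go-v2 clients compiles each one into the binary. Build this with and without the blank imports and compare sizes:

  package main

  import (
      "fmt"

      // Blank imports force each generated service client into the
      // binary, the way the provider's resource code does implicitly.
      // The real provider pulls in a few hundred of these.
      _ "github.com/aws/aws-sdk-go-v2/service/dynamodb"
      _ "github.com/aws/aws-sdk-go-v2/service/ec2"
      _ "github.com/aws/aws-sdk-go-v2/service/iam"
      _ "github.com/aws/aws-sdk-go-v2/service/s3"
  )

  func main() {
      // Each generated client adds roughly 1-2 MB to the binary.
      fmt.Println("compare go build sizes with and without the imports")
  }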


I think the providers might actually come from GitHub, because the registry insists you have them as release assets.


That indeed seems to be the case: for community providers, HashiCorp serves up JSON that refers to GitHub download links.

Here’s a sample size of 1: https://registry.terraform.io/v1/providers/spacelift-io/spac...

According to the provider registry protocol, which I have previously implemented for internal hosting (in an afternoon of writing a single file of Python): https://developer.hashicorp.com/terraform/internals/provider...
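
For a feel of how small that protocol is, here's a sketch of just the download endpoint (in Go rather than the Python mentioned above, and trimmed to the essentials; a full implementation also needs the version listing endpoint, shasums and signing keys):

  package main

  import (
      "encoding/json"
      "fmt"
      "log"
      "net/http"
      "strings"
  )

  // Sketch of the download endpoint only; the real protocol response
  // also carries filename, shasums_url, signing_keys and friends.
  func download(w http.ResponseWriter, r *http.Request) {
      // Path: /v1/providers/{namespace}/{type}/{version}/download/{os}/{arch}
      p := strings.Split(strings.TrimPrefix(r.URL.Path, "/v1/providers/"), "/")
      if len(p) != 6 || p[3] != "download" {
          http.NotFound(w, r)
          return
      }
      ns, name, version, osName, arch := p[0], p[1], p[2], p[4], p[5]
      // Naming convention copied from the carlpett/sops example above.
      url := fmt.Sprintf("https://github.com/%s/terraform-provider-%s/releases/download/v%s/terraform-provider-%s_%s_%s_%s.zip",
          ns, name, version, name, version, osName, arch)
      w.Header().Set("Content-Type", "application/json")
      json.NewEncoder(w).Encode(map[string]string{"download_url": url})
  }

  func main() {
      http.HandleFunc("/v1/providers/", download)
      log.Fatal(http.ListenAndServe(":8080", nil))
  }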


For their own managed providers they no longer provide binaries in GitHub releases, and serve them from their own servers instead. Which feels like a trap BTW.


I didn't realize that, and it's extra weird that they publish the SHA256SUM file as a release artifact when it references 14 zip files and the manifest.json, so... thanks?

But, in their defense, installing an "unofficial" provider (or build!) into TF is some JFC so there's that. We'll just add that onto the damn near infinite pile of "I hope OpenTF fixes ..." things


The AWS Go SDK is the vast majority of that bulk. In general, Go binaries can get pretty big, but AWS has hundreds of services with thousands of APIs and it’s all going to have to get included in the AWS provider.


Also, AWS has so many services that the SDKs are mostly generated from JSON descriptions plus nice wrappers on top. That leads to a different and less abstracted kind of code than you'd write yourself, which leads to bigger compiled objects.

Ruby had this problem too and at some point split the SDK into multiple gems so you don't have to install everything.


The Azure SDKs are the same: auto-generated from some underlying description. Then, for backwards compatibility, every previous version is its own complete copy, all included in one single bundle.


The AWS Go SDK is actually a case study in making everything into pointers.


It’s really a case study in the abject inadequacy of the Go type system.
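
For anyone who hasn't had the pleasure, a tiny illustration (aws-sdk-go-v2; the bucket and key names are made up): Go has no Option type and "" is a valid string, so every optional field is a pointer and every literal gets wrapped in a helper:

  package main

  import (
      "github.com/aws/aws-sdk-go-v2/aws"
      "github.com/aws/aws-sdk-go-v2/service/s3"
  )

  func main() {
      // Bucket and Key are *string, not string, so the SDK ships
      // helpers like aws.String just to take addresses of literals.
      _ = &s3.PutObjectInput{
          Bucket: aws.String("my-bucket"), // hypothetical names
          Key:    aws.String("hello.txt"),
      }
  }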


The appeal of a no-ops workflow for basic frontend apps is there. We’ve got a bunch of static docs sites built with vuepress and docusaurus that are a great fit for this. Not that we can’t whip up a CDN with terraform and a CI/CD pipeline with preview environments, but my team would rather spend its energy on our core product.

However, we’ve been badly burned by Vercel, Netlify and Render.com switching their pricing models to user-based instead of infrastructure-based pricing. We’re migrating to AWS Amplify right now, which also happens to be wonderfully integrated into our wider IT landscape, with an AWS landing zone, automated internal chargeback etc. It’s 90% the same for our use case and charges only for infra at standard AWS rates.

At this point I’m starting to wonder why it isn’t more popular.


> At this point I’m starting to wonder why it isn’t more popular.

The last time I tried it, most of the "automagic" quickly turned into "do everything by hand if you don’t want to get burned, and oh, we say we handle this use case but, as obliquely mentioned in passing, you really shouldn’t use it. Also, we don’t see the point in having our CLI tool spit out anything more than the most generic errors.

What? What do you mean access control? What is that?"

I ended up on Firebase.

And, on my current project, I found a single-table DynamoDB setup to be more reliable and predictable.

But perhaps things have improved since then?


Amplify is the most vendor-specific locked in stack you could possibly choose.

If you don't want to be burned by vendors changing the pricing model on you, don't choose a proprietary stack to build on.


I’m pretty confident that, of all the players in that market, AWS is the least likely to pull that move.


It's obviously a trade-off. But we've been running a GitHub Action/GitLab job/pick-your-poison task to just dump build output on an FTP server. It's just static websites, and it has never broken.

In my mind, this is so basic that anyone (who is a developer) should be able to do it. But that's just my take.


I've been wondering what the limits of the user-based pricing actually are.

If it's just the number of people who can go in the UI and press the button (when it automatically gets deployed from git anyway), or change env secrets, can't you just have a single admin account and call it "one user"? That could be a single actual person (how often do you change those things?) or could even be a single account shared by the whole team

What am I missing?


On Netlify, IIRC, pricing was for every git committer who was allowed to trigger a deploy.

Render.com was more lenient, with a model like you describe, but we still have different people managing different apps and it adds up.


Ouch. By git committer sounds brutal.


It’s simple: it’s not more popular because of bad UX. I would not be surprised if the best customer Vercel is chasing is also one that can pass these costs on to their own customers or take a hit on the margin.


Bad UX. Mediocre DX. Average documentation. Unreliable tools.



> German CVs basically require photos and other information

I’m not sure where you got that from, but that’s just false. Most employers I know actively ask candidates not to submit irrelevant info like photos, gender, religion etc. in their applications, as any such info is a liability for the employer.

Look up the AGG (Germany's anti-discrimination law) and how all too easy it is to get a massive slap on the wrist for even the slightest hint of discrimination. Add PII issues on top.


Photos are still on > 50% of resumes I see at a medium-sized Berlin tech company. Age is also common. I can't imagine this is less common in the south and I know it's more common outside of tech. Both are still on most "how do I prepare a German resume?" guides for foreigners.

I think the photos are awful but they don't present any special PII issues. Resumes already contain names, addresses, and phone numbers and therefore need a ton of scrutiny anyway.


I don't know how common it is in Germany, but there is a "European format CV"; here's the official EU site where you can compose one:

https://europa.eu/europass/en

The form has a placeholder for a photo at the top left (though I believe it is not compulsory to add one).

I think many people will use that, or some copy-paste format derived from it.


Other commenters have covered the workload lock-in angle pretty well. Using Kubernetes as a target platform for your application already gives you a decent shot at workload portability. Keep in mind though that some K8s APIs are leaky abstractions. You pay with lock-in to K8s, of course. At the end of the day, lock-in is a matter of tradeoffs.

An often overlooked angle is the "organizational lock-in" to the cloud. Adopting the cloud in any serious capacity, with more than a handful of teams/applications, means that you will eventually have to build up some basic organizational capabilities: a resource hierarchy (e.g. an AWS Organization with multiple accounts), an account provisioning process, federated authentication, chargeback... See https://cloudfoundation.org/maturity-model/ for an overview of these topics.

To be honest I have seen quite a few enterprise organizations that went through so much organizational pain integrating their first cloud provider that implementing a second provider is not really that exciting anymore. Now of course, if you plan on eventually leveraging multi-cloud anyway you can save yourself a lot of pain by setting things up with multi-cloud in mind from day one.

A good read on the topic is "Cloud Strategy" by Gregor Hohpe: https://architectelevator.com/book/cloudstrategy/


notion.so also has integrated support for mermaid diagrams, decent syntax highlighting etc.


Learn from my mistake: picking up terraform as a software engineer thinking "it's just a better YAML".


It's so saddening to see how the Kubernetes hype-cycle follows OpenStack's, and all the fundamental problems still seem unsolved. I sometimes feel like it's just the same story playing out 5 years later, one layer up the stack (IaaS -> CaaS) and with other fools to fall for it (with OpenStack it was sysadmins trying to run a control plane, with Kubernetes it's devs trying to run infrastructure).

The abstractions we have available to build and run distributed systems may have improved, but they still suck in the grand scheme of things. My personal nightmare is that nothing better comes along soon.

> - Is it the networking model that is simple from the consumption standpoint but has too many moving parts for it to be implemented?

Many poor sysadmins before us have tried to implement Neutron (OpenStack Networking Service) with OvS or a bunch of half-assed vendor SDNs. Or LBaaS with HAProxy.

> - Is it the storage model, CSI and friends?

I mean, the most popular CSI driver for running on-premise is rook.io, which is just wrapping Ceph. Ceph is just as hard to run as ever, and a lot of that is justified by the inherent complexity of providing high-performance multi-tenant storage.

> - Is it the bunch of controller loops doing their own things with nothing that gives a "wholesome" picture to identify the root cause?

Partially. One advantage the approach has is that it's conceptually simple, consistent, and makes it feel easy to compose complex behavior. The problem is that Kubernetes enforces very little structure, even for basics like object ownership. The result is unbounded complexity. A lack of tooling (e.g. time-travel debugging for control loops) makes debugging complex interactions next to impossible. This is also not surprising: control loops are a very hard problem, and even simple systems can spiral (or oscillate) out of control very quickly. Control theory is hard. David Anderson has a pretty good treatise on the matter: https://blog.dave.tf/post/new-kubernetes/

Compared to OpenStack, Kubernetes uses a conceptually much simpler model (control loops + CRDs) and does a much better job at enforcing API consistency. Kubernetes is locally simple and consistent, but globally brittle.

The downside is that it needs much more composition of control loops to do meaningful work, and that leads to exploding complexity because you have a bunch of uncoordinated actors (control loops) each acting on partial state (a subset of CRDs).

The implementation model of an OpenStack service otoh is much simpler, because services use straightforward "workflows" operating on a much bigger picture of global state, e.g. Neutron owning the entire network layer. This makes composition less of a source of brittleness, though OpenStack still has its fair share of that as well. Workflows are, however, much more brittle locally, because they cannot reconcile themselves when things go wrong.
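
To make the contrast concrete, a toy reconcile loop (my own sketch, hypothetical types, nothing Kubernetes-specific). The controller never executes a plan end to end; it just nudges actual state toward desired state, and every other controller does the same on its own slice of state, with no global coordinator:

  package main

  import (
      "fmt"
      "time"
  )

  // Toy model of one controller owning one slice of state.
  type state struct{ replicas int }

  func main() {
      desired := state{replicas: 3} // e.g. what a CRD spec asks for
      actual := state{replicas: 0}  // e.g. what currently exists

      // Observe, diff, nudge, repeat. Composing many such loops with
      // no shared picture of the world is where the brittleness lives.
      for actual != desired {
          actual.replicas++
          fmt.Printf("reconcile: actual=%d desired=%d\n", actual.replicas, desired.replicas)
          time.Sleep(10 * time.Millisecond)
      }
  }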


This. To be fair, that info is very hard to discover on the website (I looked it up in the code that generates the docs site, since I found that easier to parse, like a database of all models).

I own an L10 and am very happily running valetudo on it. Someone should make a business out of selling them pre-flashed with valetudo for a less technical audience…


> Someone should make a business out of selling them pre-flashed with valetudo for a less technical audience…

Please don't. Attracting a less technical audience to the project would be immensely harmful to it. It is already quite difficult to deal with issues that technical people might have. You simply do not want everyone to use your open source project. There is also no point in doing so. Why would one want to make their own life harder?


I don't know if by "make their own life harder" you mean the maintainer (you) or the user, but, as a user, valetudo has saved me. The original Xiaomi app is buggy, slow, and phones home, but Valetudo just works locally. It's amazing, thank you!


> Someone should make a business out of selling them pre-flashed with valetudo for a less technical audience…

From Why Valetudo[0]:

  First of all, please do not try to convince people to use Valetudo.

  We all know how terribly it usually turns out when people try to
  convince their friends to use linux on their desktop. Using Valetudo
  only makes sense if you understand its goals and feel like they are
  important to you. Everything else will fail.

  It is perfectly fine to continue using the cloud if you don’t really
  care about its downsides. Do not flame people for doing that. You 
  can be a bit snarky about downtimes, lag and other cloud shenanigans
  though :)

[0] https://valetudo.cloud/pages/general/why-valetudo.html

