"I haven't tried Heroku, but expect similar prices and nearly as high of a configuration burden."
Are you kidding me? Prices, sure, but comparing the configuration burden of AWS or GCP alphabet soup to a dead-simple Heroku setup is ridiculous. You set up something like hirefire.io for autoscaling, you connect your GitHub repo, and that's it. You're done. Compare that to weeks' worth of headaches setting up Terraform Cloud and CI actions, massaging your state file, and spending days tearing your head out over insanely obtuse NAT configuration parameters. It's literally night and day.
Ya I think you're right. I regret not trying Heroku first.
I'm glad to have learned AWS, although I don't know that I'll build another cloud server. The biggest problem I found is that services tend to require each other. So I ended up needing an IAM role for everything, a security group for everything, a network setting for everything... you get the idea. A la carte ends up being a fantasy. Because of that interdependency, I don't see how it would be possible to maintain a cloud server without Terraform. And if that's the case, then the services just become modules in a larger monolith. Which, to me, suggests cloud providers are focusing on the wrong level of abstraction. Which is why preconfigured platforms like Heroku exist, it sounds like.
An open source implementation of Heroku running on Terraform on AWS/GCP could be compelling. Also a server that emulates AWS/GCP services so that an existing Terraform setup could be ported straight to something like Hetzner or Linode with little modification. With a migration path to start dropping services, perhaps by running under a single identity with web server permissions (instead of superuser). And no security groups or network settings, just key-based authentication like how the web works. More like Tailscale, so remote services would appear local with little or no configuration.
Also this is a little off-topic, but I'd get rid of regions and have the hosting provider maintain edge servers internally, using something like Raft combined with an old-school cache like Varnish to emulate a CDN. The customer should be free from worrying about static files entirely, and only have to pay a little extra for ingress on the upload portion of HTTP requests (POST/PUT/PATCH payloads and request headers).
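To make the "old-school cache" half of that concrete, here's a toy sketch of what each edge node would do: serve static responses from memory until a TTL expires, and only hit the origin on a miss. (Purely illustrative; `fetch_from_origin` and the TTL policy are my own stand-ins, not anything Varnish-specific.)

```python
import time

class TTLCache:
    """Toy Varnish-style cache an edge node might run: serve cached
    bodies until they expire, then refetch from the origin server.
    A sketch of the idea, not a real CDN."""

    def __init__(self, fetch_from_origin, ttl=60.0):
        self.fetch = fetch_from_origin
        self.ttl = ttl
        self.store = {}  # path -> (expires_at, body)

    def get(self, path, now=None):
        now = time.monotonic() if now is None else now
        hit = self.store.get(path)
        if hit and hit[0] > now:
            return hit[1]            # fresh hit: origin never sees it
        body = self.fetch(path)      # miss or stale: go to origin
        self.store[path] = (now + self.ttl, body)
        return body
```

The point being: if the provider runs this for you at every edge, "regions" stop being the customer's problem for static content.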
Oh and the database should be self-scaling, as long as the customer uses deterministic columns, so no NOW() or RAND(), although it should still handle CURRENT_TIMESTAMP for created_at and updated_at columns so that Laravel/Rails work out of the box. So at least as good as rqlite, if we're dreaming!
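The way rqlite-style systems square that circle, as I understand it, is by rewriting non-deterministic functions into literals on the leader before the statement is replicated, so every replica applies a byte-identical statement. A toy sketch of that idea (the function name and regex approach are mine, not rqlite's actual implementation):

```python
import datetime
import random
import re

def rewrite_nondeterministic(stmt, now=None, rng=None):
    """Replace NOW()/RAND() with literals on the leader so that
    every replica applies the exact same statement text.
    Regex-based for illustration; a real system would rewrite
    the parsed AST, not the raw SQL string."""
    now = now or datetime.datetime.utcnow()
    rng = rng or random.Random()
    stmt = re.sub(r"\bNOW\(\)", "'" + now.isoformat() + "'", stmt)
    stmt = re.sub(r"\bRAND\(\)", lambda m: repr(rng.random()), stmt)
    return stmt

# The leader rewrites once, then ships the same bytes everywhere:
leader_stmt = rewrite_nondeterministic(
    "INSERT INTO posts (body, created_at) VALUES ('hi', NOW())")
replica_a = leader_stmt
replica_b = leader_stmt
assert replica_a == replica_b  # replicas can't diverge on the timestamp
```

With that in place, CURRENT_TIMESTAMP defaults for created_at/updated_at stop being a replication hazard, which is what lets Laravel/Rails work unmodified.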
Edit: I don't mean to be so hard on AWS here. I think the concept of sharing datacenter resources is absolutely brilliant. I just wish they offered a ~$30/mo server preconfigured with whatever's needed to run a basic WordPress site, for example.