I really would not recommend Azure unless you are going for an all-Microsoft, all-the-time strategy. Their hosted SQL Server offerings have actually been really good, and IMO are one of the best parts of the platform.
(We've used Azure for 3 years or so, AWS for longer).
Azure's USP is basically its deep integration into the MS ecosystem at every point. Azure is relentlessly promoted to every MS technical professional through all of the learning channels (and MS developer outreach is the best), Azure is integrated into the excellent developer experience of .NET and Visual Studio, hosted SQL Server on Azure is really nice (doubly so because it avoids the licensing quagmire), Azure with PowerShell is a fantastic CLI experience, etc. etc. Lots of reasons to love Azure if you are an MS specialist.
If you are outside the MS ecosystem, then you aren't getting much of that. For you, Azure is a less mature AWS with less support by and for your ecosystem. The situation with hosted MySQL on Azure is one example, but it is a two-way street: Microsoft loves its own stack more, and folks outside the MS ecosystem love Azure less, partly because AWS is already the established default.
I could list papercuts and issues that I have had with Azure itself, but in each case I'm pretty sure that AWS had equivalent issues earlier in its life, so it feels a bit unfair to do that.
I just met an Azure representative at an event, and they said managed MySQL is in private beta right now; it will take them around 3 months to get it out. I'm waiting for it too.
While not Microsoft offerings, there are some options available: my company Aiven (https://aiven.io/postgresql) offers a managed PostgreSQL service on Azure as well. ElephantSQL (https://www.elephantsql.com/) is another provider with hosted Postgres on Azure.
Very exciting, but high availability, instance cloning, and read replicas are listed as "This functionality is not yet supported for PostgreSQL instances" in the docs. Seems like those need to be available before it's a realistic option for production workloads.
This was a tough call for the team: ship now for the folks that are okay with that, or wait until it's fully baked. They chose the former, which certainly means it's not a great idea for serious production load, but that's also true of it simply being in Beta. You won't see it go GA before it's really ready for production work (including replication, backups and so on) but I think it was a reasonable choice to get people started.
Disclosure: I work on Google Cloud (but not Cloud SQL).
I realize you can't commit to any particular dates, but is your feeling that this is single-digit months away, or double-digit months away, from being out of 'beta'?
Good question! Because of those important bits of functionality that will keep rolling in during the Beta, I'd say single digit months to the most important ones.
As a point of clarification though, Beta => GA is about production hardening. Products can't go to GA without demonstrating that they've met their SLOs continuously for several weeks. Products can take longer than that if the team feels they should make more tweaks first (and pgsql certainly qualifies), but we want to both ship quickly and shake the reputation for perpetual Beta.
It's a great idea. Customers can start using it today to host the databases for their dev and QA environments, which will help suss out any issues with it before GA.
Seems like there is some confusion going on about the Free tier on a post[1]. Can you please comment there on whether the 'Always free' is free only during the first 12 months or forever?
Also, if I have been using Google Cloud since before today (when free trial was $300 over 60 days), am I still eligible for the 'Always free' tier?
From the page linked and the FAQ, it looks like it but many people on the thread I've linked are thinking otherwise.
Sorry, we've made updates to clarify. Yes, you can use it during and beyond the free trial. Even after your free trial has expired, you can still use the Always Free products.
Another quick question: is there any way to see, during instance creation/usage, whether it is part of the free tier?
When I create an f1-micro instance, it shows the usual $5/month billing, but doesn't indicate any kind of 'free tier' usage. How can we know for sure whether any service we use will be part of the free tier? I guess I can wait a few days to see if any costs accrue, but I was wondering if there was a better way for everyone.
Is there a way to see if the Cloud SQL team is planning to add more extensions? I'm mostly interested in citext myself, but seeing that many people mentioned extensions, it would be nice to have some visibility into that.
More extensions are definitely something we plan to add.
No timeline, partially because we don't know ourselves. Our main priority for the next few weeks will be ensuring everything is nice and stable. Once we are confident in the current feature set, we will start adding more.
You can watch https://cloud.google.com/sql/docs/postgres/release-notes for news.
Regarding being beta: the flexible environment instances for App Engine are/were beta (I swear it was labelled beta just a couple of days ago, but I'm not seeing it now). All the same, a lot of people are building towards it (i.e. Docker).
As someone who interacts with PostgreSQL on-prem and cloud deployments frequently, I'm curious on the Hacker News community's thoughts on one question.
AWS has a fairly mature RDS Postgres offering. What would motivate you to use Google Cloud Postgres instead?
I've heard of four reasons from users: not on AWS, don't want to be locked into one vendor, cost, and better support. Do any of these reasons resonate with you or are there others?
What motivates me personally is not that I think Google Postgres will be better than RDS (which has been great for us so far), but that I want to move to GCP for other reasons (pricing, better UI & CLI, GKE, BigQuery), and managed Postgres on RDS was the only thing important enough to keep us on AWS.
I just started using Google Postgres and, as of today, there is no way to easily get your PG data into BigQuery via any Google cloud service. You have to create your own CSV from your PG data, then upload that into BigQuery. I imagine (hope) this will change with additions to the Export options of PG instances.
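For reference, the manual route today looks roughly like the sketch below; table, dataset, and column names are hypothetical, and the bq invocation lives in a shell, not SQL:

    -- Export the relevant rows to CSV from psql (client-side copy):
    \copy (SELECT id, name, created_at FROM events) TO 'events.csv' CSV HEADER
    -- Then, from a shell, load the file into BigQuery with the bq CLI:
    --   bq load --source_format=CSV --skip_leading_rows=1 \
    --     mydataset.events events.csv id:INTEGER,name:STRING,created_at:TIMESTAMP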
I'd consider Google Cloud because the documentation, tutorials, and verbose config (on most services) on AWS are a fairly substantial barrier. Have you ever gone through trying to wire up a Lambda function to talk to an S3 bucket and an API Gateway endpoint? It's not fun. Even getting Elastic Beanstalk running with Docker isn't really easy.
I think Google Cloud is the middle ground between Heroku and AWS. It has potential to really gain traction with developers who want to get stuff shipped and not wear DevOps hats full time. What I'd like to see is Google adopting more modern software on their platform (Python 3) and trying to move things out of beta more quickly. You can't sell beta to management easily.
> I'd consider Google Cloud because the documentation, tutorials, and verbose config (on most services) on AWS are a fairly substantial barrier. Have you ever gone through trying to wire up a Lambda function to talk to an S3 bucket and an API Gateway endpoint? It's not fun.
Strongly agree. I think that using an orchestration tool like Ansible or Cloud Formation is basically essential for AWS, because so many of the services have these interdependencies.
> Even getting ElasticBeanstalk running with Docker isn't really easy.
Yeah, we tried it. Not a great experience, and using an AWS proprietary container hosting system when everyone else is standardising on Kubernetes is not remotely attractive. I'm really looking forward to trying App Engine Flexible and GKE. If we can get really good application container hosting from GCP, then that will be a strong pull to migrating whole stacks.
I'd say Elastic Beanstalk with a single container is very easy... zip your project with a Dockerfile, and it will create the container...
However, man, that Dockerrun.aws.json file (or whatever it is called), and the two versions that are incompatible with the different EB versions... it's painful. The hosted container solution is equally so.
GC's docs and tools just seem so much more straightforward (as do Azure's, for that matter).
There are lots of reasons to prefer Google, but ultimately there is one, single reason I want to switch:
The ability to cap costs.
With AWS, by the time you get a billing alarm, you could be thousands of dollars in the hole and there's nothing you can do about it. Avoiding that risk will help me sleep a little better.
I thought GCloud only had spending limits for App Engine services (including, apparently, Cloud Datastore even when used separately from App Engine), but not other services.
Thanks for alerting me to that possibility. If that's the case (can anyone confirm for sure?) then switching is probably not worth the switching costs for me right now.
As a Google customer (of both Compute/Container Engine and Cloud SQL), I'd rather use a MySQL instance hosted next to my application on Google's network than a Postgres instance that's an internet hop away in Amazon's cloud. If I were currently running my apps on AWS, that's where my DB would live too (and I'd be using RDS Postgres there).
The other reasons listed are an order of magnitude less important for my cost/benefit calculus (though things like cost and support will definitely matter to orgs bigger than mine).
I would love to see your own cloud offerings on Digital Ocean and on Google Cloud. Hopefully, with that said, we'll also see much more reasonable pricing as well!
I'm curious what you'd like to see in terms of pricing from Citus?
Yes, we do start at the higher end, $990 a month, but we're very much focused on heavier workloads. Typically users come to us at 100 GB or more. If you're well below that, we encourage you to stick with single-node Postgres, either RDS or Heroku Postgres. Once you get close to a $1k a month spend on those, then it can start to make sense to consider us.
Yes, you can absolutely get that much storage for that price, though what your application needs in terms of memory/cores varies quite a bit by application. We've seen customers that were spending $800 a month on RDS migrate and get a 2-3x performance boost for the same dollar spend.
I don't disagree at all that if you're at a lower volume of data, RDS is a great option for what you get. It all depends on your workload, though: as you scale, you may need more data in cache or more cores doing work. You can have 3 TB of cold storage that works fine for your application and costs you under $1k a month on Aurora, or you could have 100 GB of data and need all queries served from cache in milliseconds; it's very unlikely you'll get the latter for $260 a month on RDS, even at as little as 100 GB of data. For as long as you can, you should just keep scaling up, because it's much easier.
We very much focus on the point where you start to hit that ceiling, where scaling up becomes prohibitive and performance is key, not just storage. That can come as early as 100 GB, and it's increasingly common the further you get beyond that.
Yes, it is wise to do that. If you don't want the overhead of running your own SQL instance, you definitely don't want the overhead of running it on kubernetes; it's quite tricky (but the addition of StatefulSets in 1.5 has made it easier).
"The Cloud SQL Proxy allows a user with the appropriate permissions to connect to a Second Generation Cloud SQL database without having to deal with IP whitelisting or SSL certificates manually."
I think what most people mean when they say that is "don't put the DB in Docker", so don't use Docker to host Postgres. Just because your app is hosted by Docker doesn't make it more or less wise to connect to a database. It just depends on if you need a database or not.
This is exactly what I plan to do. Not really because of any specific issue with Docker, it's just that databases have very different lifecycles than your typical container.
Sorry for hijacking, but is there an issue with Cloud SQL right now? I keep getting authentication issues from the GCP dashboard even though I've logged in with 2FA multiple times and when I try to visit my Cloud SQL instances, it just says failed to load.
Is this Cloud SQL for Postgres auto-scaling, auto-replicating, fully managed, and hands-off like App Engine and Datastore, or does it need to be managed manually?
The code is pretty much unmaintained at this point, it's a pain in the ass to build, besides the v4 UUID generation function everything else in there is useless, and the v4 UUID generation function is 6x slower than the one in pgcrypto[1].
There's no reason to use it and incorporating it into projects just creates headaches down the road.
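For anyone who only needs v4 UUIDs, the pgcrypto route is a one-liner; a minimal comparison:

    -- pgcrypto's generator, no uuid-ossp needed:
    CREATE EXTENSION IF NOT EXISTS pgcrypto;
    SELECT gen_random_uuid();
    -- versus the uuid-ossp equivalent (the ~6x slower one per [1]):
    --   CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
    --   SELECT uuid_generate_v4();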
I believe that type 5 (and I guess type 3) UUIDs both have legitimate use cases, especially when data migrations are involved. Both of these types are supported by uuid-ossp but not by pgcrypto.
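The migration angle is that v5 UUIDs are deterministic: the same namespace and name always produce the same ID, so re-running a migration yields stable keys. A quick sketch (the URL is illustrative):

    CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
    -- Same namespace + name in, same UUID out, on every run:
    SELECT uuid_generate_v5(uuid_ns_url(), 'https://example.com/users/42');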
If you need plv8, high availability or replication features today in Google Cloud, our Aiven PostgreSQL service may fit your needs, have a look at https://aiven.io/postgresql
One thing that's really strange in this thread is that almost every opinion against Google's customer support is downvoted heavily. Google advocates always say that GCP support is better. Of course it has to be better, because they are the underdog in the cloud war. I can't buy the logic that GCP is Google but not that Google. What would happen if they came to dominate?
Don't get me wrong. GCP as a product is really awesome. I've been using GCE and Datastore for a project and they just work. I just can't trust Google enough to bring all my work to their cloud.
Probably because it's something that comes up in every discussion about GCP, just like Go and generics.
And some of these comments are pretty useless. Like this [1] one. What's even the point? If you can't offer anything interesting, not even an anecdotal data point about reliability, why even contribute?
There's every sign that Google is serious about their IaaS. For example, Google is already dogfooding GCP heavily. According to googlers, a bunch of public Google products run on GCP.
Also, it's worth mentioning that GCP support is very good these days.
Sure, Google is no longer non-evil, and they are deservedly notorious for killing products, but GCP is a different category altogether. They deserve being given the benefit of the doubt in this case, especially as IaaS world really benefits from competition.
That's great, but nobody is going to care all that much if Google Registry or Memegen is down for a few hours. The concern is that Google is asking customers to put mission critical applications on a system that Google itself doesn't put mission critical applications on.
We have a 24/7/365 5-minute-to-ACK pager rotation. ICANN and our customers certainly care if random parts of the Internet start dropping off. Our annual total downtime is measured in minutes.
I can tell you from personal experience that GCP is suitable for these purposes.
Which random parts of the Internet start dropping off if Google Registry is down? I'm under the impression that there wouldn't be any news articles (or even tweets) if Google Registry is down for a few hours and that hardly anybody would notice at all. Please explain what I'm missing.
If the registry goes down, WHOIS, domain checks, and DNS/nameserver updates all start failing instantly. Anyone looking to buy a domain, or update an existing one, will get error messages. If someone happens to be caught in the middle of, say, migrating hosting providers as this happens (which is statistically likely when there are a large number of domain names under management), then they'll be unable to update DNS to point to where the domain name should now go. As the outage grows in length, you'll gradually accumulate more and more stale nameserver entries that cannot be updated, resulting in an increasing number of domain names failing to resolve properly, and thus dropping off the Internet. Additionally, ICANN has stability and uptime requirements, to the point where your gTLDs can be taken away from you if you're doing a poor job of running your registry.
Now, granted, we aren't currently running any hugely important TLDs at the moment, but that won't necessarily continue to be the case going forward, plus we aren't the only ones to be using our codebase ... http://www.donuts.domains/donuts-media/press-releases/donuts...
> Now, granted, we aren't currently running any hugely important TLDs at the moment,
That's exactly the point. Google doesn't put anything mission critical on GCP, which is a huge red flag for anybody putting mission critical applications in a cloud.
> GCP as a product is really awesome. I've been using GCE and Datastore for a project and they just work. I just can't trust Google enough to bring all my works to their cloud.
I work in GCP and can say that we definitely realize this is an issue and are doing our best to address it. We definitely know that trust has been lost and it takes time to regain it, but hopefully you'll give us a chance again sometime.
I am fairly new but it certainly feels as though GCP is a bit different than the other parts of Google. I suspect this is especially true since Diane Greene came. It definitely feels like everyone knows we need to adopt a different culture to be able to sell to enterprise customers.
I'm in the same boat - I felt burned by GAE from way back in the day and so passed GCP over for AWS. After then putting together a PoC one day, I was blown away by how far Google had come, and moved my whole next project over to GCE/GKE.
Our experience with GCP's support has been great, and I've interacted with several of their PMs just via Twitter and HN.
Support for their free consumer products is completely different from a paid enterprise offering like GCP. This is true at pretty much every major company that serves both groups.
Maybe because there's a lot of people here who have positive experiences? If GCP has good support, then comments claiming otherwise based on nothing but generalisations of Google's support for other products should be down voted – untrue claims don't add to the conversation, they detract from it.
It's not actually MongoDB. It's the DocumentDB NoSQL offering, which is very good, but it just takes the MongoDB wire protocol and supports most operations. It is not the same thing.
The company's irrevocably ingrained customer support culture will ensure they'll never be a serious AWS competitor, not with all the engineering and money in the world.
All reports indicate that GCP is run very differently from the other products.
They had a rocky start as they were apparently ramping up the support organization, but these days I'm hearing nothing but praise.
They're getting very good at transparency. They have numerous mailing lists [1] ("Google Groups", ugh) for things like product release notes and planned changes. They just opened a public issue tracker [2] that covers the entire Google Cloud Platform.
If you're into Kubernetes/GKE, the Google team is also very active and responsive.
Try hosting a web site that serves Iranian people or a variety of other embargoed countries on GCP. You can't, nor will you find any hint of it documented anywhere, or any legal indication things must be this way.
At the simplest level when I buy compute time that looks like everybody else's compute time, I don't want to find after the fact that its provisioning is entangled in some weird corporate ideology that is now impacting my end users, left waiting to be discovered by the unsuspecting victim long after they've made plans around GCP.
(This is a true story from 2 months back, same shit used to happen with App Engine all the time)
That's because Google, as an American company, needs to ensure it doesn't violate certain trade embargoes, trade export law and other sanctions imposed on specific countries, like Iran.
Google restricts access to some of its business services in certain countries or regions, such as Crimea, Cuba, Iran, North Korea, Sudan, and Syria. If you try to sign in to these services from these countries or regions, the following error appears:
You appear to be signing in from a country where G Suite accounts are not supported.
Certain Google services, such as Gmail, might be available in these countries or regions for personal use, but not for business or education use.
Though some of the Google services are available for personal use in those countries, commercial use is restricted. A cloud would count as commercial use, even if it's only to host your own stuff.
GCP is far from the only cloud subject to this; many other services impose it too. Docker Hub does, for example.
Their customer support for GCP is top notch and they re-organized their entire support structure this year so each account has an account executive, SREs, etc. I've been very pleased the last year I've used GCP after migrating from AWS.
Before the reorganization, I'd send in quota updates and they'd take 1-3 days. The last quota bump I did took 7 minutes.
This might be true, but AWS's support is not good. To get good AWS support for when their ELBs aren't working, or when half of your EBS volumes have multi-second IO latency, you need to pay for premium support, which is 10% of your bill.
A few months ago I would've agreed with you. After trying out GCP, interacting with support twice and speaking with a few other companies using the platform I've changed my tune.
I've had overall pretty good experiences in my year on GCP, particularly recently. I posted a question in Slack and one of the engineers on the product PM'd me back within 15 mins to troubleshoot the problem.
If you have to raise a ticket it's a bit more variable, but on the whole I've had every issue resolved satisfactorily.
The killer for me is PITR (point-in-time recovery). A few years ago a colleague triggered an UPDATE without a WHERE, and PITR saved our sorry asses (on MS SQL Server, a fair product despite my antipathy toward MS).
Postgres has it, but it is kind of a pain to set up correctly, and I love RDS because I don't have to deal with the setup anymore.
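For the curious, the "pain" RDS hides is roughly the WAL archiving setup below (the archive path is hypothetical, and actually recovering still needs a base backup plus a recovery.conf pointing at the archive):

    -- Minimal WAL archiving for PITR (these need a server restart to take effect):
    ALTER SYSTEM SET wal_level = 'replica';
    ALTER SYSTEM SET archive_mode = 'on';
    ALTER SYSTEM SET archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f';
    -- To recover to a point in time: restore a base backup, then set
    -- restore_command and recovery_target_time in recovery.conf and start the server.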
> Postgres has it, but it is kind of a pain to set up correctly, and I love RDS because I don't have to deal with the setup anymore.
Check out Barman from 2ndQuadrant; we've been running it in production for 2 years now. Setup was crazy easy, and the one time I had to do a restore (for the same reason, an UPDATE without a WHERE) it was no-fuss to get it done.
Of course, don't let me stop you from using RDS - but there are certainly user-friendly backup solutions for Postgres when you're not :)
One of the most annoying things for me about Postgres, when I switched from MySQL, is that Postgres doesn't let you put LIMIT clauses on UPDATE/DELETE statements.
When I'm doing surgical work on a DB, I always, always want to do something like:
- SELECT statement showing me the relevant rows (and rowcount=N)
- UPDATE/DELETE statement with a corresponding limit equal to N+1
It gives nice, easy peace of mind that you aren't accidentally wiping out your whole table because you messed up the logic on your WHERE clause. Of course if you made a mistake, you probably still killed N records, but often that is much much better than killing the whole table (and maybe also downstream tables that are linked with FKEY cascade delete!)
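For what it's worth, you can emulate the MySQL behaviour in Postgres with a ctid subquery; a rough sketch, with an invented table and predicate:

    -- Count first, then cap the destructive statement at N+1:
    SELECT count(*) FROM orders WHERE status = 'stale';  -- suppose this returns 100
    DELETE FROM orders
     WHERE ctid IN (SELECT ctid FROM orders WHERE status = 'stale' LIMIT 101);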
If you are that concerned, put "begin;" before your command, check that the updated/inserted number is reasonable, and then either do a commit or a rollback. That way you also didn't screw up N+1 totally random rows. You can even put a returning clause on the update to see which rows you were affecting.
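Something like this, with hypothetical table and column names:

    BEGIN;
    UPDATE accounts
       SET balance = 0
     WHERE last_login < now() - interval '5 years'
    RETURNING id;          -- eyeball the rows and the reported row count
    -- Happy with it? COMMIT; otherwise:
    ROLLBACK;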
Or, if you can't afford to lock the table in the process: before you run UPDATE {foo} WHERE {bar}, you back up the relevant rows into a temporary table with SELECT * INTO foo_backup FROM {foo} WHERE {bar}.
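Roughly this, with invented names (and assuming an id primary key for the restore):

    -- Stash the rows you're about to touch:
    SELECT * INTO orders_backup FROM orders WHERE status = 'stale';
    UPDATE orders SET status = 'archived' WHERE status = 'stale';
    -- If that was a mistake, put the old values back from the stash:
    --   UPDATE orders o SET status = b.status
    --     FROM orders_backup b WHERE o.id = b.id;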
None of this should cause a table lock: at best you will just lock the affected rows and at worst you will end up with a predicate lock on the where clause.
It seems to me that the amount of work required to fix the above, in most cases, would be almost equal to the amount of work that would be required if you hadn't used the LIMIT clause.
From their homepage: because Cloud SQL for PostgreSQL is in Beta, some PostgreSQL features are not yet available.
Excited to see this, but holy shit is it unclear what expected costs could be. It seems like it might be on the cheap/affordable side of things, but it's not obvious. I almost prefer to just use Compose.io because at least the pricing is clear.
If HA cost about the same, i.e. ~20€, that would be amazingly cheap.
I mean, on AWS you would pay the same (20€), but you need to pay 1 year upfront. And if HA were around 16€, you would only be a little bit more expensive than a 3-year AWS RDS offering (about 1€ per month more), which is less of a problem since the storage is cheaper and you are way more flexible (cancelling and recreating as a bigger instance).
What's up with cloud instance naming schemes? I get this is kind of similar to Amazon, but man, I bet these are really unwieldy in conversations esp. if someone is new to a platform.
We know. That's why PostgreSQL pilots a hopefully clearer pricing structure, where you choose how much CPU and RAM is needed and pay per CPU and GiB of RAM. No more `db-nX-<something>-X`.
You'll still see instance size names for a while, though. I think D0-D32 are for the first generation of Cloud SQL (which is MySQL-only). db-* names are for the second generation and match GCE instance names.
There are at least 2 performance dimensions when provisioning an instance: vCPUs and memory. Often also GPUs, local SSDs, local HDDs. And then you have multiple generations of hardware, which aren't 100% comparable to each other.
So coming up with a "fully systematic" naming scheme is not really possible without listing all the parameters, and then you lose the benefit of having a name.
Is there a reason Google is so slow to compete with AWS? This seems like another example of where they're playing catch-up. Meanwhile, I'm blown away that Node is still just in Beta (with no SLA / deprecation guarantees) when Heroku and AWS have supported it for years.
They're playing catch-up in some areas, but in others (networking, containers, pub/sub, BigQuery, authentication, CLI tools), they're ahead (and AWS is far behind).
Whereas AWS aggressively targets the more traditional enterprise stacks (like running a big single-node RDBMS in the cloud), Google seems to give priority to more modern, decentralized tech.
It seems Google is very interested in making GCP a platform for companies that really need to build cloud-based distributed systems first and foremost. It's a cloud-native IaaS platform. AWS, which I also use extensively FWIW, has always felt more like a platform for porting existing on-premise applications onto. You can build atop AWS at any stage of the game, but it never quite feels like anything more than a bunch of very integrated parts of varying quality.
Using the example I have worked with, BigQuery, it appears Google is building platforms specifically for the cloud. BQ has no notion of an instance, whereas Redshift is simply a column store hosted on specific VM sizes. It takes time to build new solutions compared to simply moving the hosting location.
I don't expect this is universal across all products, but it's one example.
Will it be available in the same private network as the rest of our GCE VMs? Last I checked, Google Cloud MySQL runs over the public network, which is a big pain in the ass for access control.
Cloud SQL is Google's counterpart to RDS. They support both Postgres and MySQL.
From what I've heard, Google is running a vanilla version of Postgres. They don't offer any features beyond what mainline Postgres provides (unlike AWS Aurora, which runs modified versions of MySQL and Postgres).
> Is Google contributing anything back into Postgres codebase ?
As a general thought, it'd probably be a good idea to wait (say) 6 months until this new Cloud SQL for PG offering has matured and their team has more PG experience.
Organizations often contribute back to upstream projects as they get more involved in the relevant community, and it sounds like they're just starting out now.
I think the use case for Google Cloud SQL, RDS, Heroku, etc. is a bit different from Citus and other distributed databases. It seems that Cloud SQL has very limited scalability (32 processors, 200 GB of RAM), so it might not be very good for use cases where your working dataset is on the order of terabytes or more. Citus, on the other hand, has horizontal scalability: you can add more CPU power and RAM by adding another machine to your cluster.
If my data were on the order of 10 GB, I would choose Cloud SQL, RDS, etc. On the order of 100 GB, I would try both Cloud SQL/RDS and Citus etc. to see which one fits my use case. On the order of terabytes, I would choose Citus or some other distributed database.
(I'm a former Citus employee and current Googler on a non-Cloud SQL team)
I feel like this is going to be an increasingly hard question to answer as more and more cloud providers with interchangeable services become available. It's like buying a car: you will run into someone who has used 2 or 3 different providers of the same service, but someone with hands-on experience with them all is unlikely.
Both RDS and Heroku are aimed at more modestly sized databases, in that you don't really scale beyond the vertical capacity of a single node. The prices are relatively comparable, with RDS starting at around $20-30 a month depending on the instance type and Heroku Postgres at $50/month. Both services will get you an HA feature that allows for better uptime through automatic node failover.
Overall Heroku Postgres will probably feel a little like the Heroku platform: a little more polished and a little more "managed", but with fewer knobs to tweak, which can be both good and bad depending on your situation.
It's too early to say how Google's offering will shape up, but it'll likely be in the same vein as these first two. Lack of HA probably means that you should limit your production use of it, although it seems that the team intends to implement that eventually based off the service's documentation and other comments here.
Citus and Aurora come into play when you're looking to scale beyond a single node. Citus Cloud starts at $990 a month, so you're not likely to come to it without some non-trivial requirements. Aurora is a similar idea.
Citus' killer feature over something like Aurora is that instead of going ahead and forking Postgres wholesale, the product runs as an extension, which means that you're likely to get better compatibility going forward with new Postgres features.
Aurora is "compatible" which means that you'll be able to use psql and get access to common functionality, but are likely to see a divergence in support features. A similar situation is Redshift, which deviated around Postgres 8.0.2 [1], and at this point it's safe to say that it will never catch up.
Aurora isn't substantially more expensive than RDS Postgres. The caveat is that you can't run it on the really tiny instance sizes right now. On an r3.large with no reserved pricing you're looking at ~$200/month, or ~$115 with reserved pricing.
This is definitely going to be very useful when it's fully fleshed out but be warned that this initial beta doesn't support Google-managed replication or HA (to say nothing of an SLA, of course).
I'm the Ruby Developer Advocate for GCP. I've used App Engine Flex with Rails 5 on a couple projects recently. Nothing huge but one is in production (it is for a local non-profit). I've found it to be a great way to get things out the door quickly and I love the auto-scaling.
It doesn't have all the magic of App Engine Standard and deploy times are slower than I'd like (~5 minutes). But I'm okay with that if I can use my own database, any library I want, and I have full portability.
Right. Vitess was built by YouTube to solve its own scalability problems. It provides sharding and cluster-management functionality. In a way, it's orthogonal. We've talked about the possibility of making Vitess work on top of CloudSQL. Maybe it will happen some day.
Thanks for the computation - looks a bit more sane, but it still seems pretty expensive compared to a cheap VPS at the one-man-show scale, and quite expensive for "we need 100s of those" compared with a standalone server.
They haven't really pulled the rug out for anything on Google Cloud (there've been a few things replaced, e.g., the old Master/Slave datastore on App Engine.)
And while this doesn't have a deprecation policy applicable as a Beta feature, most GA Google Cloud features have a one-year deprecation policy, so even if they were retired, no one would have the rug pulled out.
You seem to be confusing Google Cloud Platform with Google services. AFAIK there have been no GCP features removed after they have gone into General Availability.
But hey, keep the uninformed meme going if you like.
PS: managed MySQL is currently the most requested additional service on Azure:
https://feedback.azure.com/forums/170024-additional-services...
...and managed Postgres + MySQL are currently the third most requested feature in relation to their managed DB offering:
https://feedback.azure.com/forums/217321-sql-database/filter...