Google Cloud SQL for Postgres (cloud.google.com)
755 points by wwilson on March 9, 2017 | hide | past | favorite | 232 comments


Your turn, Azure

https://feedback.azure.com/forums/217321-sql-database/sugges...

https://feedback.azure.com/forums/217321-sql-database/sugges...

https://feedback.azure.com/forums/170024-additional-services...

PS: managed MySQL is currently the most requested additional service on Azure:

https://feedback.azure.com/forums/170024-additional-services...

... and managed Postgres + MySQL are currently the third most requested feature for their managed DB offering

https://feedback.azure.com/forums/217321-sql-database/filter...


I really would not recommend Azure unless you are going for an all-Microsoft, all-the-time strategy. Their hosted SQL Server offering has actually been really good, and IMO it is one of the best parts of the platform.


Would love to hear more about your opinions on the first part.


(We've used Azure for 3 years or so, AWS for longer).

Azure's USP is basically the deep integration into the MS ecosystem at all points. Azure is relentlessly promoted to every MS technical professional through all of the learning channels (and MS developer outreach is the best), Azure is integrated into the excellent developer experience of .NET and Visual Studio, hosted SQL Server on Azure is really nice (doubly so because it avoids the licensing quagmire), Azure with PowerShell is a fantastic CLI experience, etc. etc. Lots of reasons to love Azure if you are an MS specialist.

If you are outside the MS ecosystem, then you aren't getting much of that. For you, Azure is a less mature AWS with less support by and for your ecosystem. The situation with hosted MySQL on Azure is one example, but it is a two-way street: Microsoft loves its own stack more, and folks outside the MS ecosystem love Azure less, partly because AWS is already the established default.

I could list papercuts and issues that I have had with Azure itself, but in each case I'm pretty sure AWS had equivalent issues earlier in its life, so it feels a bit unfair to do that.


I just met an Azure representative at an event and they said managed MySQL is in private beta right now. It will take them around three months to release it. Even I'm waiting for it.


That is great news as I'm probably going to use it in a large project that will start in a few months...


While not Microsoft offerings, there are some options available: my company Aiven (https://aiven.io/postgresql) offers a managed PostgreSQL service on Azure as well. ElephantSQL (https://www.elephantsql.com/) is another provider with hosted Postgres on Azure.


Very exciting, but high availability, instance cloning, and read replicas are listed as "This functionality is not yet supported for PostgreSQL instances" in the docs. Seems like those need to be available before it's a realistic option for production workloads.


This was a tough call for the team: ship now for the folks that are okay with that, or wait until it's fully baked. They chose the former, which certainly means it's not a great idea for serious production load, but that's also true of it simply being in Beta. You won't see it go GA before it's really ready for production work (including replication, backups and so on) but I think it was a reasonable choice to get people started.

Disclosure: I work on Google Cloud (but not Cloud SQL).


I realize you can't commit to any particular dates, but is your feeling that this is... single-digit months away, or multiple-digit months away, from being out of 'beta'?


Good question! Because of those important bits of functionality that will keep rolling in during the Beta, I'd say single digit months to the most important ones.

As a point of clarification though, Beta => GA is about production hardening. Products can't go to GA without demonstrating that they've met their SLOs continuously for several weeks. Products can take longer than that if they feel they should make more tweaks first (and pgsql certainly qualifies), but we want both to ship quickly and to shed the reputation for perpetual Beta.


thanks! :)


It's a great idea. Customers can start using it today to host the databases for their dev and QA environments, which will help suss out any issues with it before GA.


I'd generally want at least my QA environment to be a replica of my production environment, and I guess many others would too.

I don't want to be overly negative though - releasing now at least lets us play with it and see how it stacks up.


Many teams/companies have multiple QA environments...


Is there any formal statement of whether and how much notice might be given if Google decides to discontinue this service?


The message at the top seems to me to make clear it could be discontinued at any time with zero notice.

As soon as it gets out of beta, it will get an SLA and deprecation policy of hopefully 2+ years notice before it gets discontinued.


That's certainly the letter of the law, but it's pretty clear Cloud SQL is here to stay.


Hello!

This might be a bit off topic for this thread.

Seems like there is some confusion going on about the Free Tier in another post[1]. Can you please comment there on whether 'Always Free' is free only during the first 12 months, or forever?

Also, if I have been using Google Cloud since before today (when free trial was $300 over 60 days), am I still eligible for the 'Always free' tier?

From the page linked and the FAQ, it looks like it but many people on the thread I've linked are thinking otherwise.

Can you please clear that up for us? Thank you!

[1]: https://news.ycombinator.com/item?id=13832519


Sorry, we've made updates to clarify. Yes, you can use it during and beyond the free trial. Even if your free trial has expired, you can still use the Always Free products.


Thank you!

Another quick question. Is there any way to see during instance creation/usage whether it is part of the free tier?

When I create a f1-micro instance, it shows the usual $5/month billing, but doesn't indicate any kind of 'free tier' usage. How can we know if any service we use will be a part of the free tier for sure? I guess I can wait a few days to see if any costs accrue, but I was wondering if there was a better way for everyone.


Is there a way to see if the Cloud SQL team is planning to add more extensions? I'm mostly interested in citext myself but seeing that many people mentioned extensions, it would be nice to have some visibility in that.


More extensions is something we definitely plan to do.

No timeline, partly because we don't know ourselves. Our main priority for the next few weeks will be ensuring everything is nice and stable. Once we are confident in the current feature set, we will start adding more. You can watch https://cloud.google.com/sql/docs/postgres/release-notes for news.

Source: I work on Cloud Postgres.


Regarding being beta: the flexible environment instances for App Engine are/were beta (I swear it was labelled beta just a couple of days ago, but I'm not seeing it now). All the same, a lot of people are building towards that (i.e. Docker).


App Engine Flex just came out of beta today :)

https://twitter.com/googlecloud/status/839901710948589568


But in that case it was not without replication (i.e., useless).


(Ozgun from Citus)

As someone who interacts with PostgreSQL on-prem and cloud deployments frequently, I'm curious on the Hacker News community's thoughts on one question.

AWS has a fairly mature RDS Postgres offering. What would motivate you to use Google Cloud Postgres instead?

I've heard of four reasons from users: not on AWS, don't want to be locked into one vendor, cost, and better support. Do any of these reasons resonate with you or are there others?


What motivates me personally is not that I think Google Postgres will be better than RDS (which is great for us so far), but that I want to move to GCP for other reasons (pricing, better UI & CLI, GKE, BigQuery), and managed Postgres on RDS was the only thing important enough to keep us on AWS.


Same for us. We're migrating to GKE but are accessing Postgres over a gateway and leaving our DB heavy services on AWS... This was very good news.


Same here.

Plus RDS was the only reasonable option at that price tier (with high availability and multi-AZ). I think competition was sorely needed.

RDS was pretty much becoming a monopoly on this front.


Is Compose (www.compose.com) comparable?

They have a HA postgres offering (not sure about multi-az) and I thought I'd heard good things about them in the past?


I just started using Google Postgres and, as of today, there is no way to easily get your PG data into BigQuery via any Google cloud service. You have to create your own CSV from your PG data, then upload that into BigQuery. I imagine (hope) this will change with additions to the Export options of PG instances.


I'd consider Google Cloud because the documentation, tutorials and verbose config (on most services) on AWS are a fairly substantial barrier. Have you ever gone through trying to wire up a Lambda function to talk to an S3 bucket and API Gateway endpoint? It's not fun. Even getting Elastic Beanstalk running with Docker isn't really easy.

I think Google Cloud is the middle ground between Heroku and AWS. It has potential to really gain traction with developers who want to get stuff shipped and not wear DevOps hats full time. What I'd like to see is Google adopting more modern software on their platform (Python 3) and trying to move things out of beta more quickly. You can't sell beta to management easily.


> I'd consider Google Cloud because the documentation, tutorials and verbose config (on most services) on AWS are a fairly substantial barrier. Have you ever gone through trying to wire up a Lambda function to talk to an S3 bucket and API Gateway endpoint? It's not fun.

Strongly agree. I think that using an orchestration tool like Ansible or Cloud Formation is basically essential for AWS, because so many of the services have these interdependencies.

> Even getting ElasticBeanstalk running with Docker isn't really easy.

Yeah, we tried it. Not a great experience, and using an AWS proprietary container hosting system when everyone else is standardising on Kubernetes is not remotely attractive. I'm really looking forward to trying App Engine Flexible and GKE. If we can get really good application container hosting from GCP, then that will be a strong pull to migrating whole stacks.


I'd say Elastic Beanstalk with a single container is very easy... zip your project with a Dockerfile, and it will create the container...

However, man, that Awsrunfile or whatever it is called, and the two versions that are incompatible with the different EB versions... it's painful. The hosted container solution is equally so.

GC's docs and tools just seem so much more straightforward (as do Azure's, for that matter).


There are lots of reasons to prefer Google, but ultimately there is one, single reason I want to switch:

The ability to cap costs.

With AWS, by the time you get a billing alarm, you could be thousands of dollars in the hole and there's nothing you can do about it. Avoiding that risk will help me sleep a little better.


I thought GCloud only had spending limits for App Engine services (including, apparently, Cloud Datastore even when used separately from App Engine), but not other services.


Thanks for alerting me to that possibility. If that's the case (can anyone confirm for sure?) then switching is probably not worth the switching costs for me right now.


As a Google customer (of both Compute/Container Engine and Cloud SQL), I'd rather use a MySQL instance hosted next to my application on Google's network than a Postgres instance that's an internet hop away in Amazon's cloud. If I were currently running my apps on AWS, that's where my DB would live too (and I'd be using RDS Postgres there).

The other reasons listed are an order of magnitude less important for my cost/benefit calculus (though things like cost and support will definitely matter to orgs bigger than mine).


Yes, cost is a big factor for me.

I would love to see your own cloud offerings on Digital Ocean and on Google Cloud. Hopefully, with that said, we'll also see much more reasonable pricing as well!


I'm curious what you'd like to see in terms of pricing from Citus?

Yes, we do start at the higher end, $990 a month, but we're very much focused on heavier workloads. Typically users come to us at 100 GB or more. If you're well below that, we encourage you to stick with single-node Postgres, either RDS or Heroku Postgres. Once you get close to a $1k a month spend on those, then it can start to make sense to consider us.


A multi-node, 100 GB Postgres cluster can be had for $260 per month if you choose yearly prepaid on RDS.

The price to performance ratio is fairly unbeatable.


Price to performance is a bit debatable.

Yes, you can absolutely get that much storage for that price, though what your application needs in terms of memory/cores varies quite a bit by application. We've seen customers that were spending $800 a month on RDS migrate and get a 2-3x performance boost for the same dollar spend.

Don't disagree at all that if you're at a lower volume of data, RDS is a great option for what you get. As you scale and need more data in cache or more cores doing work, though, it all depends on your workload. You can have 3 TB of cold storage that works fine for your application and costs you under $1k a month on Aurora, or you could have 100 GB of data where you need all queries served from cache in milliseconds; it's very unlikely you'll get that for $260 a month on RDS, even at as little as 100 GB of data. For as long as you can, you should just keep scaling up, because it's much easier.

We very much focus on the point where you start to hit that ceiling: where scaling up becomes prohibitive and performance, not just storage, is key. That can be as early as 100 GB, and is increasingly common the further you get beyond that.


Which I'm agreeing with - I was trying to hint that you should make a tier for the $200 users out there ;)


I use PostgreSQL on RDS. I wouldn't touch GCE for a year still, until it's had its kinks ironed out.


> it's a realistic option for production workloads.

Surely this is only true for certain definitions of 'production workloads'?


Well, it is only in beta.


Cloud SQL for Postgres is launching today and will be available for all users early next week.

Source: Work on Cloud SQL.


Any public timeline on HA feature availability, or way to be an early adopter?


The replication and high availability documentation is here:

https://cloud.google.com/sql/docs/postgres/high-availability

https://cloud.google.com/sql/docs/postgres/replication/

Any timelines on high availability support? This would be pretty helpful.


Is it wise to use Cloud SQL + GKE? Some say it is not wise to use Docker for a DB.


Yes, it is wise to do that. If you don't want the overhead of running your own SQL instance, you definitely don't want the overhead of running it on Kubernetes; it's quite tricky (though the addition of StatefulSets in 1.5 has made it easier).

If you're running a CloudSQL instance, you can use a sidecar container to manage proxying the connection to the database: https://github.com/GoogleCloudPlatform/cloudsql-proxy

"The Cloud SQL Proxy allows a user with the appropriate permissions to connect to a Second Generation Cloud SQL database without having to deal with IP whitelisting or SSL certificates manually."


I've been having rather significant latency going through the proxy (MySQL Cloud SQL) - has anyone else experienced similar?


> Some say it is not wise to use Docker for DB

I think what most people mean when they say that is "don't put the DB in Docker", so don't use Docker to host Postgres. Just because your app is hosted by Docker doesn't make it more or less wise to connect to a database. It just depends on if you need a database or not.


This is exactly what I plan to do. Not really because of any specific issue with Docker, it's just that databases have very different lifecycles than your typical container.


Half surprised plv8 isn't an in-the-box extension.


It could be a very useful one to have. To request support for an extension, start a thread on the Cloud SQL Discussion group. https://groups.google.com/forum/#!forum/google-cloud-sql-dis...


I don't see it enabled on my account - I still only see MySQL. How does one get it?


It will be available for all users early next week.


I guess that would explain why I cannot add a PostgreSQL instance today...


Sorry for hijacking, but is there an issue with Cloud SQL right now? I keep getting authentication issues from the GCP dashboard even though I've logged in with 2FA multiple times and when I try to visit my Cloud SQL instances, it just says failed to load.


Is this Cloud SQL for Postgres autoscaling, auto-replicating, and fully managed hands-off like App Engine and Datastore, or does it need to be managed manually?


Doesn't look like it.

I suspect that they intend that use case to be fulfilled by Cloud Spanner: https://cloud.google.com/spanner/


Awesome! This is something I was really hoping for.


finally.


Has anyone seen documentation that it supports PostGIS?

Edit: Nevermind, it does! https://cloud.google.com/sql/docs/postgres/extensions


I wish uuid-ossp was in there... Being able to generate type 4 and 5 UUIDs on the database side is extremely convenient, especially during migrations.


    gen_random_uuid()
Exists in the pgcrypto extension.

Only does UUID 4, which should work for many applications.


It includes pgcrypto so you can use the new gen_random_uuid() from there to generate UUIDs.

For the record, uuid-ossp is a steaming pile and there's no reason to use it for anything on a modern Postgres install.


What exactly do you mean by this? Your comment is uninformative without some substance.


The code is pretty much unmaintained at this point, it's a pain in the ass to build, besides the v4 UUID generation function everything else in there is useless, and the v4 UUID generation function is 6x slower than the one in pgcrypto[1].

There's no reason to use it and incorporating it into projects just creates headaches down the road.

[1]: https://www.postgresql.org/message-id/flat/52CF07C2.3080101%...


I believe that type 5 (and I guess type 3) UUIDs both have legitimate use cases, especially when data migrations are involved. Both of these types are supported by uuid-ossp but not by pgcrypto.
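For illustration, the reason type 5 matters for migrations is that it is deterministic: the same input key always maps to the same UUID, even across re-runs. A sketch using Python's standard uuid module (outside the database, but the semantics match what uuid-ossp's v4/v5 functions provide and pgcrypto's gen_random_uuid() does not):

```python
import uuid

# v4: random; every call yields a different value (this is what
# pgcrypto's gen_random_uuid() gives you inside Postgres).
a, b = uuid.uuid4(), uuid.uuid4()

# v5: name-based (SHA-1); deterministic for a given namespace + name.
# During a data migration, the same legacy key always maps to the
# same UUID, so re-running the migration is idempotent.
ns = uuid.NAMESPACE_DNS
m1 = uuid.uuid5(ns, "legacy-id-12345")
m2 = uuid.uuid5(ns, "legacy-id-12345")

assert a != b            # v4 values are (overwhelmingly likely) distinct
assert m1 == m2          # v5 is reproducible
assert m1.version == 5
```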


Sadly, the only language extension is plpgsql, which is far fewer than AWS offers; AWS happens to support plv8 as well.


What about citext? It's not on there.


I'm surprised to not see it (or other similarly popular data types) on there, but can't prove it doesn't work without trying it.


This is excellent news - am currently evaluating options for cloud Postgres. Only wish they'd support a few more extensions - https://cloud.google.com/sql/docs/postgres/extensions - e.g. plv8 would be great https://github.com/plv8/plv8


I'd happily help them be able to support it ...


Jerry, your work on plv8 is fantastic and I hope someone from Google takes you up on your offer!


Would just like to chip in and say I really appreciate plv8 and all the effort you put into it. :)


thanks! and me too!


If you need plv8, high availability or replication features today in Google Cloud, our Aiven PostgreSQL service may fit your needs, have a look at https://aiven.io/postgresql


are you hiring? :) I happen to know a guy who has a ton of postgis experience, and more than a ton of plv8 experience ...


Do you sign HIPAA BAAs?


Yes, please send an email to our sales or support address and we'll work out the details.


Thanks, sent! :)



plv8 and HA are my two big ones... The main reason I'd choose postgres over alternatives would be plv8.


One thing that is really strange in this thread is that almost every opinion against Google's customer support is downvoted heavily. Google advocates always say that GCP support is better. Of course it has to be better, because they are the underdog in the cloud war. I can't buy the logic that GCP is Google, but not that Google. What would happen if they came to dominate?

Don't get me wrong. GCP as a product is really awesome. I've been using GCE and Datastore for a project and they just work. I just can't trust Google enough to bring all my work to their cloud.


Probably because it's something that comes up in every discussion about GCP, just like Go and generics.

And some of these comments are pretty useless. Like this [1] one. What's even the point? If you can't offer anything interesting, not even an anecdotal data point about reliability, why even contribute?

There's every sign that Google is serious about their IaaS. For example, Google is already dogfooding GCP heavily. According to googlers, a bunch of public Google products run on GCP.

Also, it's worth mentioning that GCP support is very good these days.

Sure, Google is no longer non-evil, and they are deservedly notorious for killing products, but GCP is a different category altogether. They deserve being given the benefit of the doubt in this case, especially as IaaS world really benefits from competition.

[1] https://news.ycombinator.com/item?id=13831966


What Google products run on GCP? That's news to me. I would think they would have announced that in the keynote.


Google Registry does. Site here: https://registry.google And you can even see our code: https://nomulus.foo

(I work on this project.)


That's great, but nobody is going to care all that much if Google Registry or Memegen is down for a few hours. The concern is that Google is asking customers to put mission critical applications on a system that Google itself doesn't put mission critical applications on.


We have a 24/7/365 5-minute-to-ACK pager rotation. ICANN and our customers certainly care if random parts of the Internet start dropping off. Our annual total downtime is measured in minutes.

I can tell you from personal experience that GCP is suitable for these purposes.


Which random parts of the Internet start dropping off if Google Registry is down? I'm under the impression that there wouldn't be any news articles (or even tweets) if Google Registry is down for a few hours and that hardly anybody would notice at all. Please explain what I'm missing.


If the registry goes down, WHOIS, domain checks, and DNS/nameserver updates all start failing instantly. Anyone looking to buy a domain, or update an existing one, will get error messages. If someone happens to be caught in the middle of, say, migrating hosting providers as this happens (which is statistically likely when there are a large number of domain names under management), then they'll be unable to update DNS to point to where the domain name should now go. As the outage grows in length, you'll gradually accumulate more and more stale nameserver entries that cannot be updated, resulting in an increasing number of domain names failing to resolve properly, and thus dropping off the Internet. Additionally, ICANN has stability and uptime requirements, to the point where your gTLDs can be taken away from you if you're doing a poor job of running your registry.

Now, granted, we aren't currently running any hugely important TLDs at the moment, but that won't necessarily continue to be the case going forward, plus we aren't the only ones to be using our codebase ... http://www.donuts.domains/donuts-media/press-releases/donuts...


> Now, granted, we aren't currently running any hugely important TLDs at the moment,

That's exactly the point. Google doesn't put anything mission critical on GCP, which is a huge red flag for anybody putting mission critical applications in a cloud.


That's only because we don't have any yet, not because we're afraid of GCP. When they launch they will be on our platform, same as all our other TLDs.


No idea. The information comes from Tim Hockin, Engineering Manager at Google: https://www.reddit.com/r/kubernetes/comments/5vuyls/apache_m...


A lot of internal apps (including Memegen) run on GCP. It's simpler to start coding and get something up and running on than bare Borg.


> GCP as a product is really awesome. I've been using GCE and Datastore for a project and they just work. I just can't trust Google enough to bring all my works to their cloud.

I work in GCP and can say that we definitely realize this is an issue and are doing our best to address it. We definitely know that trust has been lost and it takes time to regain it, but hopefully you'll give us a chance again sometime.

I am fairly new but it certainly feels as though GCP is a bit different than the other parts of Google. I suspect this is especially true since Diane Greene came. It definitely feels like everyone knows we need to adopt a different culture to be able to sell to enterprise customers.


I'm in the same boat - I felt burned by GAE from way back in the day and so passed GCP over for AWS. After making a POC one day, I was blown away by how far Google had come, and so moved my entire next project over to GCE/GKE.


Our experience has been great with GCP's support, and I've interacted with several of their PMs just by using Twitter and HN.

Support for their free consumer products is completely different from a paid enterprise offering like GCP. This is true at pretty much every major company that serves both groups.


Maybe because there's a lot of people here who have positive experiences? If GCP has good support, then comments claiming otherwise based on nothing but generalisations of Google's support for other products should be down voted – untrue claims don't add to the conversation, they detract from it.


I'm really excited about the future of GCP and that there is now a serious AWS competitor.


What's wrong with Azure?

edit: Seriously a down-vote for asking why someone doesn't like one of the major 3 cloud platforms? These threads can be pretty childish sometimes.


Azure doesn't offer Postgres instances. SQL Server and MySQL, but not Postgres.


MySQL isn't managed by Microsoft; it's managed by a third-party company. Which makes me not even want to touch MySQL on Azure.



It's not actually MongoDB. It's the DocumentDB NoSQL offering, which is very good, but it just means that it takes the MongoDB wire protocol and supports most operations. It is not the same thing.


Azure is good but if you want managed PSQL it's not an option.


I tend to downvote posts which complain about being downvoted


The company's irrevocably ingrained customer support culture will ensure they'll never be a serious AWS competitor, not with all the engineering and money in the world


All reports indicate that GCP is run very differently from the other products.

They had a rocky start as they were apparently ramping up the support organization, but these days I'm hearing nothing but praise.

They're getting very good at transparency. They have numerous mailing lists [1] ("Google Groups", ugh) for things like product release notes and planned changes. They just opened a public issue tracker [2] that covers the entire Google Cloud Platform.

If you're into Kubernetes/GKE, the Google team is also very active and responsive.

[1] https://cloud.google.com/support/docs/groups

[2] https://issuetracker.google.com/


Try hosting a web site that serves Iranian people or a variety of other embargoed countries on GCP. You can't, nor will you find any hint of it documented anywhere, or any legal indication things must be this way.

At the simplest level when I buy compute time that looks like everybody else's compute time, I don't want to find after the fact that its provisioning is entangled in some weird corporate ideology that is now impacting my end users, left waiting to be discovered by the unsuspecting victim long after they've made plans around GCP.

(This is a true story from 2 months back, same shit used to happen with App Engine all the time)


That's because Google, as an American company, needs to ensure it doesn't violate certain trade embargoes, trade export law and other sanctions imposed on specific countries, like Iran.

Here is a list of likely affected territories: https://support.google.com/a/answer/2891389?hl=en

  Google restricts access to some of its business services in certain countries or regions, such as Crimea, Cuba, Iran, North Korea, Sudan, and Syria. If you try to sign in to these services from these countries or regions, the following error appears:
  You appear to be signing in from a country where G Suite accounts are not supported.
  Certain Google services, such as Gmail, might be available in these countries or regions for personal use, but not for business or education use.
Though some of the Google services are available for personal use in those countries, commercial use is restricted. A cloud would count as commercial use, even if it's only to host your own stuff.

GCP is far from the only cloud to suffer from it and many other services impose this too. The Docker Hub does it too for example.


Didn't Iran come off the US embargo list a few months back, with a bunch of media fanfare at the time?


What on earth does that have to do with the quality of their support?


Massive intentionally undocumented gaps in service? That's the antithesis of support


Why would you think there's an intention to hide it?


Support == help desk. Not "features".


This is global politics, not a single corporation. Your complaints are against the completely wrong entity here.


Complain to the US government.


Their customer support for GCP is top notch, and they reorganized their entire support structure this year so each account has an account executive, SREs, etc. I've been very pleased in the year I've used GCP after migrating from AWS.

Before the reorganization I'd send in quota updates and they'd take 1-3 days. The last quota bump I did took 7 minutes.


This might be true, but AWS's support is not good. To get good AWS support for when their ELBs aren't working, or when half of your EBS volumes have multi-second IO latency, you need to pay for premium support, which is 10% of your bill.

https://aws.amazon.com/premiumsupport/pricing/


This is the same for GCP. Unless you pay for gold support it's essentially useless.


GCP will be replacing the percentage support cost model with a different system:

https://cloudplatform.googleblog.com/2017/03/reimagining-sup...


This has been my biggest barrier to going anywhere near it. Support (well, the lack of it) for adwords has been utterly shite.

But anytime this comes up on HN there are a slew of comments saying that support is top notch for GCE, so it's worth an evaluation at least.


A few months ago I would've agreed with you. After trying out GCP, interacting with support twice and speaking with a few other companies using the platform I've changed my tune.


I've had overall pretty good experiences in my year on GCP, particularly recently. I posted a question in Slack and one of the engineers on the product PM'd me back within 15 mins to troubleshoot the problem.

If you have to raise a ticket it's a bit more variable, but on the whole I've had every issue resolved satisfactorily.


Looks like you are paying for support - are you on the Platinum or Gold support level?


Gold


This is simply not true. Support for GCP works as expected.


Still tainted from years of abusing adwords and adsense customers.


Google announced yesterday a partnership with Rackspace where they will be supporting GCP. https://techcrunch.com/2017/03/08/rackspace-now-offers-manag...


The killer for me is PITR (point-in-time recovery). A few years ago a colleague triggered an UPDATE without a WHERE and PITR saved our sorry asses (on MS SQL Server, a fair product despite my antipathy toward MS).

Postgres has it, but it is kind of a pain to setup correctly and I love RDS because I don't have to deal with the setup anymore.


> Postgres has it, but it is kind of a pain to setup correctly and I love RDS because I don't have to deal with the setup anymore.

Check out Barman from 2ndQuadrant; we've been running it in production for 2 years now. Setup was crazy easy, and the one time I had to do a restore (for the same reason, an UPDATE without a WHERE) it was no-fuss to get it done.

Of course, don't let me stop you from using RDS - but there's certainly user friendly backup solutions for Postgres when you're not :)


One of the most annoying things for me about Postgres when I switched from MySQL is that in the former you can't put LIMIT clauses on UPDATE/DELETE statements.

When I'm doing surgical work on a DB, I always, always want to do something like :

- SELECT statement showing me the relevant rows (and rowcount=N)

- UPDATE/DELETE statement with a corresponding limit equal to N+1

It gives nice, easy peace of mind that you aren't accidentally wiping out your whole table because you messed up the logic on your WHERE clause. Of course if you made a mistake, you probably still killed N records, but often that is much much better than killing the whole table (and maybe also downstream tables that are linked with FKEY cascade delete!)
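For what it's worth, you can get the same bounded-blast-radius effect in Postgres by putting the LIMIT in a subquery over the primary key (or ctid). A minimal sketch of the pattern, illustrated with Python's stdlib sqlite3 since the workaround is plain SQL; the `users` table and row counts are made up:

```python
import sqlite3

# Hypothetical table: 10 rows, all active.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, active INTEGER)")
conn.executemany("INSERT INTO users (active) VALUES (?)", [(1,)] * 10)

# Instead of the (unsupported) DELETE FROM users WHERE active = 1 LIMIT 3,
# delete only the rows picked by a bounded subquery. In Postgres the same
# shape works with the primary key or ctid.
cur = conn.execute(
    "DELETE FROM users WHERE id IN "
    "(SELECT id FROM users WHERE active = 1 LIMIT 3)"
)
print(cur.rowcount)  # 3 -- at most three rows gone, even with a botched WHERE
```

Even if the WHERE logic is wrong, the damage is capped at the subquery's LIMIT.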


If you are that concerned, put "begin;" before your command, check that the updated/inserted number is reasonable, and then either do a commit or a rollback. That way you also didn't screw up N+1 totally random rows. You can even put a returning clause on the update to see which rows you were affecting.
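To make that concrete, here's a minimal sketch of the begin/inspect/rollback workflow using Python's stdlib sqlite3 (the `accounts` table is made up; in psql you'd type BEGIN; and ROLLBACK; yourself, and could tack RETURNING * onto the UPDATE to see the affected rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts (balance) VALUES (?)", [(100,)] * 5)
conn.commit()

# The driver opens a transaction implicitly before the UPDATE
# (the equivalent of typing BEGIN; in psql).
cur = conn.execute("UPDATE accounts SET balance = 0")  # oops: forgot the WHERE

if cur.rowcount == 1:      # we meant to touch exactly one row
    conn.commit()
else:
    conn.rollback()        # 5 rows affected: mistake caught before it stuck

print(conn.execute("SELECT SUM(balance) FROM accounts").fetchone()[0])  # 500
```

The sanity check on the affected row count is what turns a table-wipe into a non-event.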


Or if you can't afford to lock the table in the process, before you run UPDATE {foo} WHERE {bar} you back up the relevant rows into a temporary table with SELECT * INTO foo_backup FROM {foo} WHERE {bar}.


Postgres uses MVCC (multiversion concurrency control), which handles concurrency without locking everything you touch.


None of this should cause a table lock: at best you will just lock the affected rows and at worst you will end up with a predicate lock on the where clause.


It seems to me that the amount of work required to fix the above, in most cases, would be almost equal to the amount of work that would be required if you hadn't used the LIMIT clause.


Sorry it's not clear, but does google cloud sql for postgres have PITR or plan to support it in near future?


From their homepage, the features not yet available in Beta (Cloud SQL for PostgreSQL is in Beta, so some PostgreSQL features aren't there yet):

1) Replication

2) High-availability configuration

3) Point-in-time recovery (PITR)

4) Import/export in CSV format


What's the difference between PITR and a volume snapshot restoration?


PITR restores your last full backup (or snapshot) and replays the transaction log forward to the point in time you set, so it can be very precise.

Volume snapshot restoration is atomic and probably faster, but it only gets you the specific point in time when the snapshot was taken.

I guess the worst cases in terms of storage usage for the two mechanisms are not that different, but PITR should be fully transparent performance-wise.
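For the curious, hand-rolling PITR on vanilla Postgres (the part RDS manages for you) boils down to continuous WAL archiving plus a recovery target. A minimal sketch with illustrative paths and timestamp; real setups usually lean on a tool like Barman or WAL-E:

```ini
# postgresql.conf -- continuous WAL archiving
# (wal_level = replica on 9.6; older versions use wal_level = archive)
wal_level = replica
archive_mode = on
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'

# recovery.conf -- after restoring a base backup, replay WAL up to a timestamp
restore_command = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2017-03-09 14:30:00'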


Excited to see this, but holy shit is it unclear what expected costs could be. It seems like it might be on the cheap/affordable side of things, but it's not obvious. I almost prefer to just use Compose.io because at least the pricing is clear.


The pricing is listed here: https://cloud.google.com/sql/docs/postgres/pricing#pg-pricin...

Sample monthly pricing:

db-f1-micro instance : $7.56 for compute;

10GB HDD : $0.9 for storage;

No network cost if the database instance is talking to a GCE VM in the same region;

Disclaimer: I work on Google Cloud SQL


If HA cost about the same, i.e. ~€20, that would be amazingly cheap. I mean, on AWS you would pay the same (€20), but you need to pay 1 year upfront. And if HA came in around €16, you would only be a little more expensive than a 3-year AWS RDS offering (~€1 per month more), which is less of a problem since the storage is cheaper and you are way more flexible (cancelling and recreating on a bigger instance).


>db-f1-micro

>db-g1-small

>db-n1-highmem-2

>db-n1-standard-8

>D32 Database Instance (16GB RAM)

>Tier D0, D1, D2, D4, D8, D16, D32

What's up with cloud instance naming schemes? I get this is kind of similar to Amazon, but man, I bet these are really unwieldy in conversations esp. if someone is new to a platform.


We know. That's why the PostgreSQL offering pilots a hopefully clearer pricing structure, where you choose how much CPU and RAM you need and pay per vCPU and per GiB of RAM. No more `db-nX-<something>-X`.

You'll still see instance size names for a while, though. I think D0-D32 are for the first generation of Cloud SQL (which is MySQL only); db-* are for the second generation and match GCE instance names.


> What's up with cloud instance naming schemes?

There's at least 2 performance dimensions for provisioning an instance: vCPUs and memory. Often also GPUs, local SSDs, local HDDs. And then you have multiple generations of hardware, which aren't 100% comparable to each other.

So coming up with a "fully systematic" naming scheme is not really possible without listing all the parameters, and then you lose the benefit of having a name.


I guess it's unclear because the instance, storage and networking are all separately priced? The pricing calculator (https://cloud.google.com/products/calculator/) might help.


There's a per-cpu cost and per-GB memory cost (or specific costs for the shared cpu instances), and then storage and egress pricing.

https://cloud.google.com/sql/docs/postgres/pricing

Perhaps the difficulty of working this out is from a lack of clarity on what those values would be? Or have I missed something bigger?


Is there a reason Google is so slow to compete with AWS? This seems like another example of where they're playing catch-up. Meanwhile, I'm blown away that Node is still just in Beta (with no SLA / deprecation guarantees) when Heroku and AWS have supported it for years.


They're playing catch-up in some areas, but in others (networking, containers, pub/sub, BigQuery, authentication, CLI tools), they're ahead (and AWS is far behind).

Whereas AWS aggressively targets the more traditional enterprise stacks (like running a big single-node RDBMS in the cloud), Google seems to give priority to more modern, decentralized tech.


It seems Google is very interested in making GCP a platform for companies that really need to build cloud-based distributed systems first and foremost. It's a cloud-native IaaS platform. AWS, which I also use extensively FWIW, has always felt more like a platform for porting existing on-premise applications onto. You can build atop AWS at any stage of the game, but it never quite feels like it's any more than a bunch of very integrated parts with varying degrees of quality.


Yes, exactly.


Using the example I have worked with, BigQuery, it appears Google is building platforms specifically for the cloud. BQ does not use the context of an instance, whereas Redshift is simply a column store hosted on specific VM sizes. It takes time to build new solutions compared to simply moving the hosting location.

I don't expect this is universal across all products, but one example.


The last piece of the puzzle for me to have a full stack alternative to AWS. Thanks Google!

Edit: not a complete end of the world, but it would be really nice (and completely easy/safe) to have the uuid-ossp extension available.


pgcrypto is available which has UUID v4 support.


Yep. I now believe Google is serious about cloud. It is beyond me why it took Google so long to do this.


First to market does not always win (see Apple). Google can take all the lessons learned from AWS RDS and other players and build better solutions.


I don't think it's an all or nothing game either. I think there is a lot of room for the big three IaaS platforms to coexist comfortably.


Yes! We have been asking for this for a while now when asked for feedback about what we would value. Managed ElasticSearch is next on our wishlist.


Will it be available in the same private network as the rest of our GCE VMs? Last I checked, Google Cloud MySQL runs over the public network, which is a big pain in the ass for access control.


No, it's currently public IP only.


Huge news! This + GKE + the new improved GCE billing setup is killer for most shops.


Could you elaborate why that combination is killer? Is the new billing setup more efficient cost wise? What is the new billing setup?


Promising.

Is Google contributing anything back into Postgres codebase ?

They could have named something different for the product - Cloud SQL ? seriously.


Cloud SQL is Google's counterpart to RDS. They support both Postgres and MySQL.

From what I've heard, Google is running a vanilla version of Postgres. They don't offer any features beyond what mainline Postgres provides (unlike AWS Aurora, which runs modified versions of MySQL and Postgres).


> Is Google contributing anything back into Postgres codebase ?

As a general thought, it's probably a good idea to wait (say) 6 months until this new Cloud SQL for PG offering has matured and their team has more PG experience.

Places often contribute back to upstream projects as they get more involved in the relevant community, and it sounds like they're just starting out now.


Awesome, was the main service needed before being able to consider moving from AWS - let the cloud wars begin!


this is huge! I take my words back - RDS finally has a competitor.


Well, I guess I can try out Google Cloud finally


Does anyone have a good comparison between this, RDS, Aurora, Citus, Heroku PG and some of the other Postgres (and "Postgres-compatible") services?

With so many DBaaS tools available, I'd like to know the best options for things like pricing, availability, features, tooling, monitoring, etc...


I think the use case for Google Cloud SQL, RDS, Heroku, etc. is a bit different from Citus and other distributed databases. Cloud SQL seems to have fairly limited vertical scalability (32 processors, 200GB of RAM), so it might not be a good fit when your working dataset is on the order of terabytes or more. Citus, on the other hand, scales horizontally: you add CPU power and RAM by adding more machines.

If my data were on the order of 10GBs, I would choose Cloud SQL, RDS, etc. At 100GBs, I would try both Cloud SQL/RDS and Citus to see which fits my use case better. At terabyte scale, I would choose Citus or some other distributed database.

(I'm a former Citus employee and current Googler on a non-Cloud SQL team)


I feel like this is going to be an increasingly hard question to answer as more and more cloud providers with interchangeable services become available. It's like buying a car: you will run into people who have used 2 or 3 different providers of the same service, but someone with hands-on experience with them all? Unlikely.


So this will be far from exhaustive but ...

Both RDS and Heroku are aimed at more modestly sized databases, in that you don't really scale beyond the vertical capacity of a single node. The prices are relatively comparable, with RDS starting at around $20-30 a month depending on the instance type and Heroku Postgres at $50/month. Both services will get you an HA feature that allows for better uptime through automatic node failover.

Overall Heroku Postgres will probably feel a little like the Heroku platform: a little more polished and a little more "managed", but with fewer knobs to tweak, which can be both good and bad depending on your situation.

It's too early to say how Google's offering will shape up, but it'll likely be in the same vein as these first two. Lack of HA probably means that you should limit your production use of it, although it seems that the team intends to implement that eventually based off the service's documentation and other comments here.

Citus and Aurora come into play when you're looking to scale beyond a single node. Citus Cloud starts at $990 a month so you're not likely to come into it without some non-trivial requirements. Aurora is similar idea.

Citus' killer feature over something like Aurora is that instead of going ahead and forking Postgres wholesale, the product runs as an extension, which means that you're likely to get better compatibility going forward with new Postgres features.

Aurora is "compatible", which means that you'll be able to use psql and get access to common functionality, but you're likely to see a divergence in supported features. A similar situation is Redshift, which diverged from Postgres around 8.0.2 [1], and at this point it's safe to say it will never catch up.

[1] http://docs.aws.amazon.com/redshift/latest/dg/c_redshift-and...


> Aurora is similar idea.

Aurora isn't substantially more expensive than RDS Postgres. The caveat is that you can't run it on the really tiny instance sizes right now. On an r3.large with no reserved pricing you're looking at ~$200/month, or ~$115 with reserved.


What is the maximum disk space for this offering? Amazon's RDS peaks out at just 6TB currently.


10TB.


Wait did this leak before the keynote announcement? :-) (I'm at Google Cloud NEXT right now).


Well, about 10 different googlers have hinted at it on HN threads and elsewhere over the last fortnight.


They've been hinting for months.


No, it was mentioned in the keynote.


lol. I need to get off HN.


Does anyone know if it supports PostGIS? I'm not seeing much on the page (on mobile).


Yes

Disclosure: Not a Google Employee, and I don't work on Cloud SQL


Thanks. It's such a popular extension that I'd be surprised if it wasn't supported.



This is definitely going to be very useful when it's fully fleshed out but be warned that this initial beta doesn't support Google-managed replication or HA (to say nothing of an SLA, of course).


This makes me seriously consider switching from heroku for my rails postgres app. Probably could cut my costs significantly.

Anyone have any comments or experiences with the Google app engine flex environment for ruby?


I'm the Ruby Developer Advocate for GCP. I've used App Engine Flex with Rails 5 on a couple projects recently. Nothing huge but one is in production (it is for a local non-profit). I've found it to be a great way to get things out the door quickly and I love the auto-scaling.

It doesn't have all the magic of App Engine Standard and deploy times are slower than I'd like (~5 minutes). But I'm okay with that if I can use my own database, any library I want, and I have full portability.

I did a blog post with some of the stuff that I learned: http://www.thagomizer.com/blog/2017/03/09/rails-on-app-engin...


Drop me an email (see profile), I'll connect you with a Rubyist at Google who can talk all about Ruby on GCP.


Because my first question was "What are the deviations from standard Postgres?":

https://cloud.google.com/sql/docs/features#differences-pg

It looks like there aren't many: you can't use SUPERUSER, and they enable extensions, options, and parameters one by one on request.


Awesome. Now, can you do the same for Elasticsearch ?


Any chance of seeing pipelinedb being supported?


I've been looking forward to this forever


Hopefully this pushes Microsoft to expand their offering. I'd love managed Postgres in Azure.


Heard they have a very easy setup for SQL Server with 3 instances and high availability.

I don't think that they care (or should care) much about Postgres. One doesn't go to Azure for that.


I heard a closed beta for managed postgres on Azure is currently running


Interesting! Is there any info online about this, or is it all just 'secret squirrel' rumours for now?


The day before I was about to migrate to RDS. Change of plans!


Why change? Because GCloud hype or because cost? I'd stick to AWS RDS because GCloud Postgres is still in beta as they highlighted.


Does this use Vitess? Are those two competing products?


> Vitess is a database clustering system for horizontal scaling of MySQL.

So probably not, since this is about supporting Postgres on Cloud SQL.


Right. Vitess was built by YouTube to solve its own scalability problems. It provides sharding and cluster-management functionality. In a way, it's orthogonal. We've talked about the possibility of making Vitess work on top of CloudSQL. Maybe it will happen some day.


Sounds like that might take a bite out of the market for Spanner :P


Extremely exciting, been waiting for this for a while


It would be awesome to see pglogical in the list of supported extensions, which can help to deal with replication for now.


5 GB database with 2 GB RAM for $132/month (tier D4)?

Seriously, who is this for? I have no idea - SSD VPS like this is about $10/mo ...


That is pricing for 1st Generation, which is a whole different product that does not support Postgres. The pricing for Postgres is here: https://cloud.google.com/sql/docs/postgres/pricing#pg-pricin...

    (1*0.0413+2*0.0070)*730 + 5*0.17 = $41.22/month
      1 vCPU   2GB RAM        5GB SSD


Thanks for the computation - that looks a bit more sane. But it still seems pretty expensive compared to a cheap VPS at one-man-show scale, and quite expensive at "we need 100s of these" scale compared with standalone servers.


5 GB is included storage, additional is available for $0.24 per GB per month.

Still, this doesn't seem competitive with AWS. The same money buys you substantially more RAM: https://aws.amazon.com/rds/pricing/

Edit: sibling comment points out the actual pricing for google is lower.


$10 + the very non-trivial cost of administering it yourself.


For micro-scale self-managed Postgres, what's the difference? Keep in mind that GCP doesn't support HA (yet) and hasn't guaranteed anything.


Does this support hstore and JSON?


Yes. hstore is an available extension, json is a built-in datatype.

https://cloud.google.com/sql/docs/postgres/extensions


Good news. More competition the better.


This is great news! Finally!


Fantastic news.


is there IOPS info somewhere?


IOPS for GCE Persistent Disk volumes (which back Cloud SQL) scale with volume size: https://cloud.google.com/compute/docs/disks/performance#type...

(For "Local SSD" the IOPS are a function of the number of devices you attach)


I'm not building anything on top of google unless they give EOL time lines and stop pulling the rug out from under services.


They haven't really pulled the rug out for anything on Google Cloud (there've been a few things replaced, e.g., the old Master/Slave datastore on App Engine.)

And while this doesn't have a deprecation policy applicable as a Beta feature, most GA Google Cloud features have a one-year deprecation policy, so even if they were retired, no one would have the rug pulled out.


They do have a one-year deprecation policy[1] for pretty much all of their GA offerings on GCP[2].

[1]: https://cloud.google.com/terms/

[2]: https://cloud.google.com/terms/deprecation


You seem to be confusing Google Cloud Platform with Google services. AFAIK there have been no GCP features removed after they have gone into General Availability.

But hey, keep the uninformed meme going if you like.


You can find our deprecation policy here:

https://cloud.google.com/terms/

Disclaimer: I work on GCP, though not on Cloud SQL.


What exactly has gone away from GCP unexpectedly?


Strongly agree. My perception is that they've got no grit, which stinks because Amazon has it in spades


Can we all stop going on about "grit" yet?



