
We tried a logical replication-based strategy for Postgres to achieve no-downtime version upgrades during business hours, but there were so many gotchas and things you have to do on the side (stuff like sequence tracking, etc.) that we gave up and took the downtime to do it more safely.
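The sequence-tracking gotcha is that logical replication streams table rows but not sequence state, so after cutover the new primary's sequences lag behind and fresh inserts can collide with replicated rows. A minimal sketch of one common workaround, assuming you read `last_value` per sequence from the source's `pg_sequences` view and run the generated `setval()` calls on the target (the margin value and sequence names here are illustrative):

```python
# Logical replication does not carry sequence state, so after cutover
# the target's sequences must be advanced manually. One approach: read
# last_value for every sequence on the source (e.g. from pg_sequences)
# and emit setval() statements for the target.

def setval_statements(sequences):
    """Given (schema, name, last_value) rows from the source's
    pg_sequences view, return SQL statements to run on the target.
    A safety margin guards against writes that raced the snapshot."""
    margin = 1000  # hypothetical buffer; tune for your write rate
    stmts = []
    for schema, name, last_value in sequences:
        stmts.append(
            f"SELECT setval('{schema}.{name}', {last_value + margin});"
        )
    return stmts

# Example with made-up sequence rows:
rows = [("public", "users_id_seq", 42137), ("public", "orders_id_seq", 9001)]
for stmt in setval_statements(rows):
    print(stmt)
```

Running this during a brief write freeze at cutover (rather than live) keeps the margin small; some teams skip the margin entirely and just stop writes before reading the values.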

I think Postgres is really cool and very reliable, but I've always wondered what people do for high availability. And yeah, pgbouncer needing to be a thing bugs me.

It feels like we're getting closer and closer to "vanilla pg works perfectly for the common workflows" though, so it's tough to complain too much.



I have written an article about upgrading PSQL with no/low downtime with Logical Replication. (More like a note to my future self)

See if you can understand enough of it and consider doing it for your next upgrade. (I have done that for 12 → 13.)


So to be honest, I see that article (linked below) and I think "Yeah, that's probably all right", but given this is for a Real System With Serious People's Data, I'm weighing "take the downtime and do the offline data upgrade that _basically everyone does_" against "try to use logical replication to get a new database in the right state, knowing that logical replication doesn't copy _everything_, so I need to manually copy some stuff, and if I miss stuff I'm in a bad situation"...

I trust the pg people that the tools do what's advertised, but it's a very high-risk proposition.


Maybe you could give a link to the post?


What's the link?



This is what happens when I don't have enough sleep...

I forgot to post the article link!

Thanks for posting it.



