The Physics of Multi-Master

By Simon Riggs / January 8, 2016 / in Simon's PlanetPostgreSQL

If you try to update the same data at the same time in multiple locations, your application has a significant problem, period.

That’s what I call the physics of multi-master.

How that problem manifests itself depends on your choice of technology. Choosing Postgres, Oracle or ProblemoDB won’t change the problem; it just gives you choices for handling it.

If you choose single master, then you get an error because one of the nodes you try to update is read-only and so can’t be updated at all.

If you have multiple masters, then you get to choose between an early abort because of serialization errors, or a later difficulty when conflict resolution kicks in. Eager serialization causes massive delays on every transaction, even ones that have no conflict problems. A better, more performant way is to resolve any conflicts later, taking an optimistic approach that gives you no problems if you have no conflicts. That is why BDR supports post-commit conflict resolution semantics.
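To illustrate the optimistic approach, here is a minimal sketch of last-update-wins conflict resolution, the simplest post-commit resolution strategy. This is a hypothetical simplification for illustration, not BDR's actual implementation; the key property it demonstrates is that every node, applying the same rule to the same pair of conflicting row versions, converges on the same winner:

```python
from dataclasses import dataclass

@dataclass
class Version:
    value: str
    commit_ts: float  # commit timestamp at the originating node
    node: str         # deterministic tie-breaker

def resolve(local: Version, incoming: Version) -> Version:
    """Last-update-wins: keep the row version with the later commit
    timestamp; break timestamp ties by node name so that all nodes
    converge on the same answer regardless of apply order."""
    if incoming.commit_ts != local.commit_ts:
        return incoming if incoming.commit_ts > local.commit_ts else local
    return incoming if incoming.node > local.node else local

# Two nodes committed conflicting updates to the same row:
a = Version("paid", commit_ts=100.0, node="node_a")
b = Version("cancelled", commit_ts=101.5, node="node_b")
print(resolve(a, b).value)  # the later commit wins on every node
```

Note that no transaction waited on any other: the cost of the conflict is paid only when a conflict actually happens, which is the whole point of the optimistic approach.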

Or if you use a shared cache system like Oracle RAC then you get significant performance degradation as the data blocks ping around the cluster. The serialization is enforced by block-level locks.

There isn’t any way around this. You either get an ERROR, or you get some other hassle.

So BDR and multi-master aren’t supposed to be a magic bullet; they’re an option you can take advantage of for carefully designed applications. Details on BDR.

Now some developers reading this will say, “Man, I’m never touching that.” One thing to remember is that single master means one node, in one place in the world. Response times are nice if you’re sitting right next to it, but what happens when you’re on the other side of the planet? Will all the people using your application wait while you access your nice simple application?
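That wait is easy to quantify with back-of-the-envelope physics. The figures below are assumed round numbers (roughly 20,000 km to the antipode, and light in optical fibre travelling at about two thirds of c):

```python
# Light in optical fibre travels at roughly 2/3 of c, ~200,000 km/s.
FIBRE_SPEED_KM_S = 200_000
ANTIPODAL_KM = 20_000  # roughly half the Earth's circumference

one_way_ms = ANTIPODAL_KM / FIBRE_SPEED_KM_S * 1000
round_trip_ms = 2 * one_way_ms
print(f"one-way: {one_way_ms:.0f} ms, round trip: {round_trip_ms:.0f} ms")
# A single round trip to the far side of the planet costs ~200 ms
# before any routing, queuing, or database work happens at all.
```

Every synchronous round trip to a distant single master pays that bill again, which is why geography alone can make a remote master painful.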

Physics imposes limitations and we need database solutions to work around them.

Tags: BDR, Postgres-BDR
4 replies
  1. Jakub Wartak says:
    January 9, 2016 at 6:08 pm

    “Or if you use a shared cache system like Oracle RAC then you get significant performance degradation as the data blocks ping around the cluster. The serialization is enforced by block-level locks.”

    I was actually always very curious why the Postgres community doesn’t develop something just like that for OLTP – shared everything with a SAN (I know the answer, that it won’t happen; I talked with one of your guys – Petr – about it).

    I’m mostly an Oracle guy, working with RAC systems, and really the performance of that solution WHEN DONE PROPERLY isn’t as bad as you say – actually it’s freaking AWESOME. RAC is a masterpiece of engineering. The degradation is simply not there in a properly tuned system/app where you avoid those “block ping-pongs”/hotspots between nodes.

    Historically it was bad when RAC was called Oracle Parallel Server, starting somewhere in the 90s, and Oracle did not have Cache Fusion at that time. Since 2004 at least, it has had plenty of features that solve all those problems: hash (sub)partitioned tables/indexes, REVERSE indexes, DB services to “partition the load” through listeners, cached NOORDER sequences, Cache Fusion (via dedicated interconnects, Ethernet or IB; max two-way/three-way block transfers even in clusters of >=4 nodes), Dynamic Block Remastering, Past Images, avoiding disk pings (IO) in case of remastering dirty blocks, caching of previous versions of blocks for read consistency, handling custom small-block (e.g. 2kB) tablespaces, tricks with minimize_rows_per_block/PCTFREE… and best of all, it is nearly completely transparent to applications and the rest of the Oracle stack, with no need to worry about write-write conflicts between nodes, because part of RAC – GES, Global Enqueue Services – solves global cluster locking for you… It was more than 10 years of journey…

    • Simon Riggs says:
      January 9, 2016 at 6:34 pm

      “When done properly.” I’ve had experience with OPS…

    • Matt says:
      January 15, 2016 at 4:24 am

      A shared nothing approach is the only thing that does not wake a DBA up at night. Shared resources made sense when fast and reliable data storage was expensive, but now local storage cost & capacity advances have made all of that overhead, complexity budget, and shared failure modes a liability. A bad SAN firmware update can take down an entire SAN. I would not put anything on a SAN without file system or application level checksums.

      The only proper implementation is redundant paths from end point to end point, with nothing shared in-between. For example (with a different technology), if you want a reliable TCP connection, consider dual homing everything with MultiPath TCP.

  2. Jim Nasby says:
    January 15, 2016 at 9:54 pm

    “So BDR and multi-master isn’t supposed to be a magic bullet, its an option you can take advantage of for carefully designed applications.”

    We need to make “there’s no such thing as a magic bullet” t-shirts.

    An interesting look into running into the laws of physics is the Google Spanner whitepaper (http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/spanner-osdi2012.pdf). Long story short, they installed multiple GPS receivers and actual atomic clocks in a bunch of their data centers to try to build a global distributed multi-master system… that apparently maxes out at several thousand TPS. I know 2nd Quadrant and others have pushed Postgres well past 10x that level.

    To be fair, Google is running their largest money maker (AdSense) on Spanner, so clearly it works well enough for what they need. What I find most interesting about it is TrueTime, a time data type that accounts for actual clock drift. I wrote it up at http://bluetreble.com/2015/10/time-travel/.

