Simon Riggs

The Physics of Multi-Master

January 8, 2016 / 4 Comments / in Simon's PlanetPostgreSQL / by Simon Riggs

If you try to update the same data at the same time in multiple locations, your application has a significant problem, period.

That’s what I call the physics of multi-master.

How that problem manifests itself depends on your choice of technology. Choosing Postgres, Oracle or ProblemoDB won’t change the problem; it just gives you different options for handling it.

If you choose single master, then you get an error because one of the nodes you try to update is read-only and so can’t be updated at all.
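
For illustration, here is a minimal sketch of what that looks like against a read-only Postgres hot standby (the table and values are hypothetical):

    -- On a read-only standby in a single-master setup:
    UPDATE accounts SET balance = balance - 100 WHERE id = 42;
    ERROR:  cannot execute UPDATE in a read-only transaction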

If you have multiple masters, then you get to choose between an early abort caused by serialization errors and a later reckoning when conflict resolution kicks in. Eager serialization causes massive delays on every transaction, even ones that have no conflict problems. A better, more performant way is to resolve any conflicts later, taking an optimistic approach that causes no problems if you have no conflicts. That is why BDR supports post-commit conflict resolution semantics.
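
To make that concrete, here is a minimal sketch of the kind of write-write conflict being described, assuming a hypothetical two-node multi-master cluster and a made-up accounts table:

    -- On node A:
    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 42;
    COMMIT;  -- commits locally; no cross-node lock is taken

    -- Concurrently, on node B:
    BEGIN;
    UPDATE accounts SET balance = balance - 250 WHERE id = 42;
    COMMIT;  -- also commits locally

    -- When each node later applies the other's change, it detects that
    -- row 42 was updated concurrently and resolves the conflict then
    -- (for example, by keeping the version with the latest commit
    -- timestamp), instead of making both transactions wait up front.

Transactions that never touch the same rows pay none of that cost, which is the point of the optimistic approach.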

Or if you use a shared cache system like Oracle RAC then you get significant performance degradation as the data blocks ping around the cluster. The serialization is enforced by block-level locks.

There isn’t any way around this. You either get an ERROR, or you get some other hassle.

So BDR and multi-master isn’t supposed to be a magic bullet; it’s an option you can take advantage of for carefully designed applications. Details on BDR

Now some developers reading this will go, “Man, I’m never touching that.” One thing to remember is that single master means one node, in one place in the world. That gives nice response times if you’re sitting right next to it, but what happens when you’re on the other side of the planet? Will all the people using your application happily wait while their requests travel around the world to reach your nice, simple application?
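
For a rough sense of scale: light in fibre travels at roughly 200,000 km/s, so a path to the other side of the planet (on the order of 20,000 km) costs about 100 ms each way, or around 200 ms per round trip, before any queuing or processing is added. A transaction that needs several round trips to a single distant master can easily spend a second or more just waiting on the speed of light.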

Physics imposes limitations and we need database solutions to work around them.

Tags: BDR, Postgres-BDR
4 replies
  1. Jakub Wartak says: January 9, 2016 at 6:08 pm

    “Or if you use a shared cache system like Oracle RAC then you get significant performance degradation as the data blocks ping around the cluster. The serialization is enforced by block-level locks.”

    I was actually always very curious why the Postgres community doesn’t develop something just like that for OLTP – shared everything with a SAN (I know the answer is that it won’t happen; I talked with one of your guys – Petr – about it).

    I’m mostly an Oracle guy, working with RAC systems, and really the performance of that solution WHEN DONE PROPERLY isn’t as bad as you’re saying – actually it’s freaking AWESOME. RAC is a masterpiece of engineering. It is simply not true in a properly tuned system/app where you avoid those “block ping-pongs”/hotspots between nodes.

    Historically it was bad when RAC was still called Oracle Parallel Server, back in the 90s, and Oracle did not have Cache Fusion at that time. Since at least 2004 it has had plenty of features that solve all those problems: hash (sub)partitioned tables/indexes, REVERSE indexes, DB services to “partition the load” through listeners, cached no-order sequences, Cache Fusion (via dedicated interconnects, Ethernet or IB, with at most two-way/three-way block transfers even in clusters of >= 4 nodes), Dynamic Block Remastering, Past Images, avoiding disk pings (I/O) when remastering dirty blocks, caching previous versions of a block for read consistency, handling custom small-block (e.g. 2 kB) tablespaces, tricks with minimize_rows_per_block/PCTFREE… and best of all, it is nearly completely transparent to applications and the rest of the Oracle technologies, without the need to worry about write-write conflicts between nodes, because part of RAC – GES, Global Enqueue Services – solves global cluster locking for you… It was more than 10 years of journey…

    • Simon Riggs says: January 9, 2016 at 6:34 pm

      “When done properly”. I’ve had experience with OPS…

    • Matt says: January 15, 2016 at 4:24 am

      A shared-nothing approach is the only thing that does not wake a DBA up at night. Shared resources made sense when fast and reliable data storage was expensive, but now advances in local storage cost and capacity have made all of that overhead, complexity budget, and those shared failure modes a liability. A bad SAN firmware update can take down an entire SAN. I would not put anything on a SAN without file system or application level checksums.

      The only proper implementation is redundant paths from end point to end point, with nothing shared in-between. For example (with a different technology), if you want a reliable TCP connection, consider dual homing everything with MultiPath TCP.

  2. Jim Nasby says: January 15, 2016 at 9:54 pm

    “So BDR and multi-master isn’t supposed to be a magic bullet; it’s an option you can take advantage of for carefully designed applications.”

    We need to make “there’s no such thing as a magic bullet” t-shirts.

    An interesting look at running into the laws of physics is the Google Spanner whitepaper (http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/spanner-osdi2012.pdf). Long story short, they installed multiple GPS receivers and actual atomic clocks in a bunch of their data centers to try to build a globally distributed multi-master system… that apparently maxes out at several thousand TPS. I know 2ndQuadrant and others have pushed Postgres well past 10x that level.

    To be fair, Google is running their largest money maker (AdSense) on Spanner, so clearly it works well enough for what they need. What I find most interesting about it is TrueTime, a time data type that accounts for actual clock drift. I wrote up about it at http://bluetreble.com/2015/10/time-travel/.
