How long does it take to change your mind?

November 3, 2017 / in 2ndQuadrant, Simon's PlanetPostgreSQL / by Simon Riggs

You’re clever, which means you’re mostly right about things. But everybody is wrong sometimes, so how long does it take for you to change your mind?

People don’t often change their minds quickly. A snap answer is always whatever you were already thinking: if it was a No, you say No; if it was a Yes, you say Yes. If you answer too quickly, you can’t possibly have taken in what was being said to you.

Can you change your mind?

“When the facts change, I change my mind. What do you do, sir?” – often misattributed to John Maynard Keynes (http://quoteinvestigator.com/2011/07/22/keynes-change-mind/)

For humans, changing your mind based on new information takes days or months. If you have emotional objections, it can take months or years. If anyone has research on that, I’d be interested to see it – the numbers above are just my observation of how we humans behave.

So PostgreSQL adoption has been slow, but it builds over time. Many former users of Oracle, MySQL, and other databases now use and accept PostgreSQL, and I welcome them, as a former Oracle DBA myself.

PostgreSQL itself never changes its mind. Once we COMMIT, that data is there until the next UPDATE/DELETE.

We can’t simply UNCOMMIT a transaction because later transactions depend upon it. In PostgreSQL we literally can’t undo a transaction because we don’t record that information – we use a redo-only transaction log manager.
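
To make that concrete, here is a minimal sketch (the accounts table is purely illustrative): ROLLBACK is available right up until COMMIT, but once a transaction commits, the only way back is a new, compensating transaction.

    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    ROLLBACK;   -- fine: nothing was made durable yet

    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    COMMIT;     -- durable: there is no UNCOMMIT

    -- The only “undo” now is a new transaction that compensates:
    UPDATE accounts SET balance = balance + 100 WHERE id = 1;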

Someone suggested UNVACUUM to me once. I laughed because my mind was set. It’s only years later that I think about what the user requirement actually was, rather than the specific design proposal used to describe it.
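
The kernel of that requirement is real, though: deleted rows linger as dead tuples until VACUUM reclaims them, which is exactly the window any “UNVACUUM” would have needed. A minimal sketch using the standard pgstattuple extension (the table t is purely illustrative):

    CREATE EXTENSION pgstattuple;
    CREATE TABLE t (id int);
    INSERT INTO t SELECT generate_series(1, 1000);
    DELETE FROM t;                                   -- rows become dead tuples, still on disk
    SELECT dead_tuple_count FROM pgstattuple('t');   -- ~1000: the data is physically present
    VACUUM t;                                        -- space reclaimed; nothing left to recover
    SELECT dead_tuple_count FROM pgstattuple('t');   -- 0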

PostgreSQL supports Point-in-Time Recovery (PITR), which I wrote back in 2004. Without scripting, though, recovering a previous database state can take real effort.
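
For reference, a PITR restore at the time of writing is driven by a recovery.conf file in the restored data directory (PostgreSQL 11 and earlier; the archive path and target time here are illustrative):

    # recovery.conf
    restore_command = 'cp /mnt/wal_archive/%f %p'
    recovery_target_time = '2017-11-03 09:00:00'
    recovery_target_action = 'pause'   # inspect the result before promoting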

Which is why I’m thinking about how to do in-database rollback. The best way seems to be to implement Historical Query, and that is something I’ll be working on in 2018. Unless I change my mind.
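
A Historical Query might look something like this (hypothetical syntax only, modeled on SQL:2011 temporal queries and Oracle’s Flashback Query, and not valid PostgreSQL today):

    -- Hypothetical: PostgreSQL does not implement this
    SELECT * FROM accounts AS OF TIMESTAMP '2017-11-03 09:00:00';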

4 replies
  1. Jakub Wartak says:
    November 3, 2017 at 10:34 am

    Simon, are you confident that such functionality is really needed in PgSQL and won’t be wasted development effort?

    I’ve very rarely seen SELECT AS OF $time (Oracle’s Flashback Query) used even in emergencies, although in theory it’s a nice thing. Perhaps the reason is that important DBs are nearly always locked down, audited, and backed up, so there aren’t a lot of requests to see how the data looked some time ago. I’ve used Flashback Transaction Back-out; it’s doable, but in real operations it is risky, complex madness, especially when operating on distributed data sets (e.g. microservices).

    Just hinting that the features actually used, and which give more confidence in the solution and in crisis situations, are much simpler things like “undrop table” (restoring from a trash bin by renaming instead of dropping), simple views with PITR progress data, and easy whole-DB rewind-to-past-time options (which could be achieved outside PostgreSQL using snapshots).

  2. Luca Veronese says:
    November 3, 2017 at 9:55 pm

    At first sight it seems to me that Historical Query is different from in-database transaction rollback. While the first can be accomplished by keeping expired tuples for a configurable amount of time without vacuuming them, the second is much more complex to attain. First you need to identify all tuples that have been touched by a given transaction; then you have to create an undo transaction that reverts the effects of the previous one. And this can only be done if the soon-to-be-reverted tuples have not been subsumed by new versions or been deleted. But even this is not sufficient, because a transaction is represented not only by the set of tuples it has changed but also by the set of tuples it has read to make that change. Simply reverting the changes at a later time could violate database- or application-level constraints. I think this is a problem that can’t easily be solved in a generic fashion without application-level support.

  3. Mohammad Alhashash says:
    November 4, 2017 at 9:14 am

    Could this feature be implemented using the same mechanism as pg_rewind, i.e. by using the full page writes stored in the xlog?

  4. Shaun Thomas says:
    November 9, 2017 at 6:54 pm

    I’ve actually thought about this before over the years. My mental model says we could do it by having a GUC that lets you define a period of transaction lag, giving you a rolling window that VACUUM will ignore. So long as that duration hasn’t elapsed, the old data is still there; given that, you’d just need syntax to retrieve it. It’s definitely an interesting thought, and I’d love to see if it bears fruit.

