Andrew Dunstan

Managing Freezing in PostgreSQL

October 10, 2018 / 6 Comments / in Andrew's PlanetPostgreSQL, PostgreSQL / by Andrew Dunstan

Postgres contains a moving event horizon, which is in effect about 2 billion transactions behind the current transaction id. Because transaction ids are compared in a circular space, transactions up to 2 billion ahead of, or more than 2 billion behind, the current transaction id are considered to be in the future, and will thus be invisible to current transactions.

Postgres avoids this catastrophic data loss by specially marking old rows so that, no matter where they fall in relation to the current transaction id, they remain visible.

Freezing is the process of marking old live tuples (i.e. database rows) so that they don’t get run over by the moving event horizon, which would otherwise make them appear to be in the future. This is in contrast to vacuuming, which frees up the space consumed by old dead tuples that are no longer visible to any transaction.

Both processes are managed by vacuum.

There are a number of settings that govern how freezing is done.

First, vacuum_freeze_min_age governs whether a tuple will be frozen while vacuum is already looking at a page to see if it has dead tuples that can be cleaned up. Tuples older than vacuum_freeze_min_age will be frozen in this case. Setting this low means there will be less work to do later on, but at the possible cost of extra effort in both CPU and IO or WAL activity. Generally you probably want this set to at least a few hours’ worth of transactions. Let’s say you’re expecting to do up to 2000 transactions per second as a sustained rate; 2000 TPS is 7.2m transactions per hour, so a fairly aggressive setting for this case might be, say, 20m. The default setting is 50m. The same reasoning applies to vacuum_multixact_freeze_min_age. Note that the transaction id and multixid counters are independent – you need to keep track of both of them.
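The sizing arithmetic above can be sketched in a few lines. The helper name and its three-hour default window are illustrative assumptions, not anything Postgres defines:

```python
# Back-of-envelope sizing for vacuum_freeze_min_age, following the
# reasoning above: cover at least a few hours' worth of transactions.
# The helper and its 3-hour default are illustrative assumptions.

def suggested_freeze_min_age(tps, hours=3):
    """Transactions consumed over `hours` at a sustained rate of `tps`."""
    return tps * 3600 * hours

per_hour = 2000 * 3600                 # 2000 TPS sustained
print(per_hour)                        # 7200000 transactions per hour
print(suggested_freeze_min_age(2000))  # 21600000 -- same ballpark as the 20m above
```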

Second, there are vacuum_freeze_table_age and vacuum_multixact_freeze_table_age. These settings govern when autovacuum will look not just at pages that might have dead rows, but at any page that might have unfrozen rows. The default for both is 150m. If you have reduced vacuum_freeze_min_age enough, in many cases this more aggressive vacuum will have little or no work to do. In any case, the process is not as expensive as it used to be: modern versions of Postgres (9.6 and up) keep a map of pages where all the tuples are frozen, and visit only the pages that are not all frozen, so this is no longer a full table scan.
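As a concrete illustration, the settings discussed so far might look like this in postgresql.conf. The values follow the 2000 TPS scenario above and are examples, not recommendations:

```
# postgresql.conf -- illustrative values for the 2000 TPS example above
vacuum_freeze_min_age = 20000000                # freeze tuples older than 20m xids
vacuum_multixact_freeze_min_age = 20000000      # same idea for multixids
vacuum_freeze_table_age = 150000000             # default; aggressive-vacuum threshold
vacuum_multixact_freeze_table_age = 150000000   # default
```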

Last, there is autovacuum_freeze_max_age. If the last complete scan of the table for unfrozen rows was more than this many transactions ago, autovacuum will start an anti-wraparound vacuum on the table. The default is 200m; similarly for autovacuum_multixact_freeze_max_age, whose default is 400m. This forced vacuum is something you really want to avoid, and there are two things you can do. First, it is very common to increase these settings to something like 1 billion to give yourself more headroom, especially on systems that are heavy consumers of transactions. You could go higher, but you want to keep plenty of transaction space between your oldest tuple and the event horizon. Second, it is important to monitor your systems and take remedial action before any database runs into this. That remedial action often includes manual vacuuming.
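A monitoring query along the following lines shows how close each table is to a forced anti-wraparound vacuum; this is a sketch, and the table name in the remedial command is a placeholder:

```sql
-- How many transactions of headroom each table has before autovacuum
-- forces an anti-wraparound vacuum on it.
SELECT c.oid::regclass AS table_name,
       age(c.relfrozenxid) AS xid_age,
       current_setting('autovacuum_freeze_max_age')::bigint
         - age(c.relfrozenxid) AS xids_until_forced_vacuum
FROM pg_class c
WHERE c.relkind = 'r'
ORDER BY xids_until_forced_vacuum
LIMIT 10;

-- Typical manual remedial action for a table that is getting close
-- (some_table is a placeholder):
VACUUM (FREEZE, VERBOSE) some_table;
```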

One problem that can occur is when you have DDL that causes a normal (i.e. not anti-wraparound) autovacuum to cancel itself. If this happens often enough, an anti-wraparound vacuum will eventually be forced; any DDL then queues up behind the vacuum process, and that in turn blocks any further DML. At that point your table is effectively unreadable until the vacuum finishes. Whether this happens depends on the usage pattern of your database, but it is not just a theoretical possibility, and Postgres deployments and DBAs do need to take it into account.

Monitoring your database cluster is critical to managing this. In particular, you need to monitor the datfrozenxid and datminmxid of each database in the cluster, and if these get too old, take remedial action before an anti-wraparound vacuum is required. Often the problem lies with one or a few tables in the database; which ones can be discovered by examining the relfrozenxid and relminmxid of the tables in the database. The age() and mxid_age() functions are useful for discovering the age of the transaction id and multixid counters respectively.
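The checks just described can be written using the catalog columns and functions named above; a sketch:

```sql
-- Oldest transaction id and multixid per database in the cluster
SELECT datname,
       age(datfrozenxid)    AS xid_age,
       mxid_age(datminmxid) AS mxid_age
FROM pg_database
ORDER BY xid_age DESC;

-- The tables responsible, within the current database
SELECT c.oid::regclass        AS table_name,
       age(c.relfrozenxid)    AS xid_age,
       mxid_age(c.relminmxid) AS mxid_age
FROM pg_class c
WHERE c.relkind IN ('r', 'm', 't')   -- tables, matviews, TOAST
ORDER BY xid_age DESC
LIMIT 10;
```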

Freezing is not something you can avoid; it is an essential maintenance activity in Postgres that needs to be actively managed.

Tags: freezing, PostgreSQL
6 replies
  1. Raj
    Raj says:
    February 20, 2019 at 7:02 pm

    Hi
    Thanks for detailed explanation and post.

I have a question. My DB is 9.2 and running with default values for the following parameters.

    autovacuum_freeze_max_age : 200M
    vacuum_freeze_min_age : 50M
    vacuum_freeze_table_age : 150M

I am running vacuum daily and monitoring transaction age, and also making sure that there are no dead tuples in the tables.

    Now,
Will autovacuum run at the age of 200M with “autovacuum: VACUUM schema.table (to prevent wraparound)” in this case? If so, is there any impact during its run even though I don’t have any dead tuples?

    How to determine my optimal parameter value ?

    autovacuum_freeze_max_age
    vacuum_freeze_min_age
    vacuum_freeze_table_age

    Thanks & Regards
    Raj

    Reply
    • craig.ringer
      craig.ringer says:
      May 16, 2019 at 1:18 am

      Autovacuum should be progressively freezing tuples as it goes. So you won’t have a giant big freeze run. That only becomes an issue if vacuum isn’t keeping up with normal activity.

      Reply
  2. jup
    jup says:
    March 22, 2020 at 3:44 pm

    “Transactions more than 2 billion ahead of or more than 2 billion behind the current transaction id are considered to be in the future, …” Really? Or shall it read: “Transactions *up to* 2 billion ahead of or more than 2 billion behind the current transaction id are considered to be in the future, … “

    Reply
    • Andrew Dunstan
      Andrew Dunstan says:
      April 1, 2020 at 10:20 am

      Yes, you’re right. I’ll adjust the text. Thanks for noticing.

      Reply
  3. Vivek Singh
    Vivek Singh says:
    August 24, 2020 at 9:53 pm

    autovacuum_max_freeze_age should be autovacuum_freeze_max_age

    Reply
    • Andrew Dunstan
      Andrew Dunstan says:
      May 7, 2021 at 3:05 pm

      fixed, thanks

      Reply

