Simon Riggs

PostgreSQL Maximum Table Size

February 20, 2018 / 9 Comments / in Simon's PlanetPostgreSQL / by Simon Riggs

Various limits on the PostgreSQL database are listed here: http://www.postgresql.org/about/

One of those limits is the Maximum Table Size, listed as 32TB. It’s been that way for many years now.

The only problem is that it has always been slightly wrong. And now it's got much, much bigger.

The table size is limited by the maximum number of blocks in a table, which is 2^32 blocks. The default block size is 8192 bytes, hence the default limit was 32TB as listed. That was wrong in two ways, because PostgreSQL has always had a configurable block size which allows up to 32768 bytes per block, which would give a maximum size of 128TB. The only problem is that changing the block size requires an unload/reload, so the effective limit per table was 32TB as advertised.
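The arithmetic is simple enough to check; a quick sketch using the 2^32 block count and the two block sizes from the paragraph above (binary units, 1TB = 2^40 bytes):

```python
# Maximum table size = maximum block count * block size.
MAX_BLOCKS = 2 ** 32                 # block numbers are 32-bit

default_limit = MAX_BLOCKS * 8192    # default 8 KB blocks
bigger_limit = MAX_BLOCKS * 32768    # largest configurable block size

print(default_limit // 2 ** 40)      # 32  (TB)
print(bigger_limit // 2 ** 40)       # 128 (TB)
```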

PostgreSQL has always supported Table Inheritance, which has been used in some cases to implement something similar to Table Partitioning in other databases. The big blocker there was that this wasn't handled well in the Optimizer, so it wasn't easily usable. PostgreSQL's initial attempt at that was by myself in PostgreSQL 8.1 in 2005, where we introduced constraint_exclusion, though by my own admission that needed more major work. Various tweaks helped, but didn't change the game significantly. Major work came in the form of two failed attempts to add Partitioning, the first one using Rules and the second one using Triggers, neither of which was very practical. Luckily the next attempt was some years in the planning and started from a good design before it was written, leading to a successful implementation of Declarative Partitioning in PostgreSQL 10. There is still work to be done and I'm pleased to say it looks like many of the issues will be solved in PostgreSQL 11, with much work contributed by a great many folk from the big names in PostgreSQL new feature development: 2ndQuadrant, EnterpriseDB, NTT Data (listed alphabetically).
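For reference, declarative partitioning in PostgreSQL 10 looks like this; a minimal sketch, with table and column names that are illustrative rather than taken from the post:

```sql
-- Parent table: holds no rows itself, just routes them to subtables.
CREATE TABLE measurements (
    ts  timestamptz NOT NULL,
    val numeric
) PARTITION BY RANGE (ts);

-- One subtable per year; rows are routed automatically on INSERT.
CREATE TABLE measurements_2018 PARTITION OF measurements
    FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');
```

Unlike the old inheritance-plus-triggers approach, the planner knows about these partitions directly, which is what makes the scheme practical.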

Oh yeah, maximum table size. Partitioning theoretically allows you to have one big table made up of many smaller tables. We store that counter in a 32-bit field, so we ought to be able to store lots of data.

My colleague David Rowley found a bug that has existed for >22 years in PostgreSQL that accidentally limited the number of subtables to a 16-bit value, limiting us in PostgreSQL 10 to only 65535 subtables, or 2048 Petabytes. Fixed in PostgreSQL 11, so now we can go to the full 2^32 subtables, each of size 2^32 * 8192 bytes. Wahoo!

So the maximum size of tables in PostgreSQL is

  1. 32 Terabytes (32TB) in PostgreSQL 9.6 or earlier
  2. 2 Exabytes (2EB) in PostgreSQL 10 – 2048PB (Petabytes) or 2 million TB
  3. 0.137 Lottabytes in PostgreSQL 11 – math brain explosion…
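Those figures are easy to verify in binary units (1TB here meaning 2^40 bytes); a quick check:

```python
BLOCK = 8192
per_table = 2 ** 32 * BLOCK      # 9.6 and earlier: 32 TB per table

pg10 = 65535 * per_table         # 16-bit subtable counter
pg11 = 2 ** 32 * per_table       # full 32-bit subtable counter

print(per_table // 2 ** 40)      # 32    (TB)
print(round(pg10 / 2 ** 50))     # 2048  (PB, i.e. roughly 2 EB)
print(pg11 / 2 ** 80)            # 0.125 (YB, i.e. 128 ZB -- see the comments)
```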

Hope that’s big enough for you all?

9 replies
  1. akshunj
    akshunj says:
    February 20, 2018 at 11:18 pm

    Is there any performance benefit to tinkering with the block size, excluding the maximum table size? Block size isn’t something I ever considered beyond partition alignment at the file system level.

    Thanks.

    Reply
  2. Sokolov Yura
    Sokolov Yura says:
    February 21, 2018 at 6:15 am

    Incidentally, the maximum size of a single table can be much smaller than 32 TB:

    The TOAST table for a table may contain toasted values for at most 2^32 entries. If a table has 300 toastable fields, and each value of those fields is 30 bytes, then there can be no more than 15M rows. These rows would occupy no more than 100 GB in the main table, and 350 GB in the TOAST table.
    (Note, I didn't check this; before calculating it today I thought the limit would be near a TB.)

    Therefore, table partitioning can be unavoidable in some corner cases.

    (Another TOAST gotcha: it consumes OIDs. That is why a new table in a database can end up with a high-numbered OID. And the OID counter is protected by a global lock.)

    Excuse me if I am mistaken somewhere.

    Reply
  3. akretschmer
    akretschmer says:
    February 22, 2018 at 9:17 am

    Hi Simon, you can also use partitioning for subtables, so the limits are much higher 😉
    (but maybe the catalog will explode sooner with such a lot of tables …)

    Reply
    • WaltervdSchee
      WaltervdSchee says:
      February 23, 2018 at 12:54 pm

      I think you did make a math explosion… shouldn't it be 128 zettabytes instead of yottabytes, with the 8192-byte block? You could make that 512 zettabytes with the 32768-byte block.

      Reply
      • Simon Riggs
        Simon Riggs says:
        February 23, 2018 at 12:58 pm

        “Big”

        Reply
  4. Willi Rauffer
    Willi Rauffer says:
    September 7, 2018 at 9:14 am

    32 TB * 2^32 = 137,438,953,472 TB = 137.44 ZB (Zettabytes) = 0.13744 YB (Yottabytes)

    Reply
  5. Willi Rauffer
    Willi Rauffer says:
    September 7, 2018 at 9:23 am

    Exactly: 128 Zebibyte [ZiB] or 0.125 Yobibyte [YiB] (factor 1024 and not 1000)

    Reply
  6. Willi Rauffer
    Willi Rauffer says:
    September 7, 2018 at 9:31 am

    Let’s say: 128 Zettabytes 😉
    Will Oracle ever build a Zettadata or a Yottadata database machine? 😉 No one would buy it, because of the per-core licensing policy!

    Reply
  7. Konstantin Gredeskoul
    Konstantin Gredeskoul says:
    January 7, 2021 at 2:37 pm

    For practical purposes, and in particular on Amazon Aurora PostgreSQL — what is the max table size?

    They bumped the DB size to 128Tb according to https://aws.amazon.com/about-aws/whats-new/2020/09/amazon-aurora-increases-maximum-storage-size-128tb/

    But what about per table?

    Finally and most critically: does this number include indexes, or are indexes considered separate from the table as far as max size is concerned?

    Reply
