PostgreSQL Maximum Table Size
Various limits on the PostgreSQL database are listed here: http://www.postgresql.org/about/
One of those limits is the Maximum Table Size, listed as 32TB. It’s been that way for many years now.
The only problem is that it has always been slightly wrong. And now it's got much, much bigger.
The table size is limited by the maximum number of blocks in a table, which is 2^32 blocks. The default block size is 8192 bytes, hence the default limit was 32TB as listed. That was wrong in two ways, because PostgreSQL has always had a configurable block size which allows up to 32768 bytes per block, which would give a maximum size of 128TB. The only problem is that changing the block size requires an unload/reload, so the effective limit per table was 32TB as advertised.
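A minimal sketch of that arithmetic (plain Python, nothing PostgreSQL-specific; binary terabytes throughout):

```python
# A table is addressed by a 32-bit block number, so:
#   max table size = 2^32 blocks * block size
def max_table_size(block_size_bytes):
    return 2**32 * block_size_bytes

TB = 2**40  # binary terabyte

assert max_table_size(8192) == 32 * TB     # default 8 kB blocks  -> 32 TB
assert max_table_size(32768) == 128 * TB   # maximum 32 kB blocks -> 128 TB
```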
PostgreSQL has always supported Table Inheritance, which has been used in some cases to implement something similar to Table Partitioning in other databases. The big blocker there was that this wasn't handled well in the Optimizer, so it wasn't easily usable. PostgreSQL's initial attempt at that was mine, in PostgreSQL 8.1 in 2005, where we introduced constraint_exclusion, though by my own admission that needed more major work. Various tweaks helped, but didn't change the game significantly. Major work came in the form of two failed attempts to add Partitioning, the first one using Rules and the second one using Triggers, neither of which was very practical. Luckily the next attempt was some years in the planning and started from a good design before it was written, leading to a successful implementation of Declarative Partitioning in PostgreSQL 10. There is still work to be done and I'm pleased to say it looks like many of the issues will be solved in PostgreSQL 11, with much work contributed by a great many folk from the big names in PostgreSQL new feature development: 2ndQuadrant, EnterpriseDB, NTT Data (listed alphabetically).
Oh yeah, maximum table size. Partitioning theoretically allows you to have one big table made up of many smaller tables. The number of subtables is stored in a 32-bit field, so we ought to be able to store lots of data.
My colleague David Rowley found a >22-year-old bug in PostgreSQL that accidentally limited the number of subtables to a 16-bit value, restricting us in PostgreSQL 10 to only 65535 subtables, or 2048 Petabytes. It's fixed in PostgreSQL 11, so now we can go to the full 2^32 subtables, each of size 2^32 * 8192 bytes. Wahoo!
So the maximum size of a table in PostgreSQL is:
- 32 Terabytes (32TB) in PostgreSQL 9.6 or earlier
- 2 Exabytes (2EB) in PostgreSQL 10 – 2048PB (Petabytes) or 2 million TB
- 0.137 Yottabytes in PostgreSQL 11 – math brain explosion…
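For concreteness, the three limits can be reproduced with a few lines of arithmetic (binary units, and rounding the 16-bit subtable cap up to 2^16):

```python
# Reproducing the three per-version limits from the post:
BLOCK = 8192                        # default block size in bytes
TB, PB, EB, ZiB = 2**40, 2**50, 2**60, 2**70

one_table = 2**32 * BLOCK           # 9.6 and earlier: 2^32 blocks per table
pg10 = 2**16 * one_table            # 10: subtables capped at a 16-bit value
pg11 = 2**32 * one_table            # 11: full 32-bit subtable count

assert one_table == 32 * TB         # 32 TB
assert pg10 == 2 * EB == 2048 * PB  # 2 EB = 2048 PB
assert pg11 == 128 * ZiB            # 128 zebibytes = 0.125 binary yottabytes
```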
Hope that’s big enough for you all?
Is there any performance benefit to tinkering with the block size, excluding the maximum table size? Block size isn’t something I ever considered beyond partition alignment at the file system level.
Incidentally, the maximum size of a single table could be much less than 32 TB:
A table's TOAST table can contain TOAST entries for only 2^32 values. If a table has 300 toastable fields, and each value of those fields is 30 bytes, then there can be no more than ~15M rows. These rows would occupy no more than 100 GB in the main table, and 350 GB in the TOAST table.
(Note, I didn't check this. Before I calculated it today, I thought the limit would be near a TB.)
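A quick sanity check of the row count in that comment (my own back-of-envelope arithmetic, assuming one TOAST entry per toasted value out of a 2^32 pool; not verified against the TOAST implementation):

```python
# If a table's TOAST storage can identify at most 2^32 values,
# and every row toasts 300 fields, the row count is capped at:
toastable_fields = 300
max_toasted_values = 2**32

max_rows = max_toasted_values // toastable_fields
print(max_rows)   # 14316557 -- roughly the "15M rows" in the comment
```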
Therefore, table partitioning could be unavoidable in some corner cases.
(Another TOAST gotcha: it consumes OIDs. That is why a new table in a database can already have a high-numbered OID. And OID assignment is protected by a global lock.)
Excuse me if I am mistaken somewhere.
Hi Simon, you can also use partitioning for subtables, so the limits are much higher 😉
(but maybe the catalog will explode sooner with such a lot of tables …)
I think you did make a math explosion… shouldn't it be 128 Zettabytes instead of Yottabytes, with the 8192-byte block? You could make that 512 Zettabytes for the 32768-byte block.
32 TB * 2^32 = 137,438,953,472 TB = 137.44 ZB (Zettabytes) = 0.13744 YB (Yottabytes)
Exactly: 128 Zebibyte [ZiB] or 0.125 Yobibyte [YiB] (factor 1024 and not 1000)
Let’s say: 128 Zettabytes 😉
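Both figures in this thread can be reproduced; the discrepancy is just binary vs. decimal prefixes (my own arithmetic, for illustration):

```python
# 2^32 subtables, each 32 binary terabytes (2^45 bytes):
total = 2**32 * (32 * 2**40)        # = 2^77 bytes

assert total == 128 * 2**70         # 128 ZiB = 0.125 YiB, as the reply says

# Reading each "TB" as a decimal terabyte (10^12 bytes) instead
# gives the 137.44 ZB figure quoted above:
mixed = 2**32 * (32 * 10**12)
print(mixed / 10**21)               # 137.438953472
```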
Will Oracle ever build a Zettadata or a Yottadata database machine? 😉 No one would buy it, because of the per-core licensing policy!
For practical purposes, and in particular on Amazon Aurora PostgreSQL — what is the max table size?
They bumped the DB size to 128TB according to https://aws.amazon.com/about-aws/whats-new/2020/09/amazon-aurora-increases-maximum-storage-size-128tb/
But what about per table?
Finally and most critically: does this number include indexes, or are indexes considered separate from the table as far as max size is concerned?