7 Best Practice Tips for PostgreSQL Bulk Data Loading
Sadequl Hussain

September 15, 2020 / in PostgreSQL, PostgreSQL 13, Sadeq's PlanetPostgreSQL / by Sadequl Hussain

Sometimes, PostgreSQL databases need to import large quantities of data in a single step or a small number of steps. This is commonly known as a bulk data import, where the data source is typically one or more large files. The process can sometimes be unacceptably slow.

There are many reasons for such poor performance: indexes, triggers, foreign keys, GUID primary keys, or even the Write Ahead Log (WAL) can all cause delays.

In this article, we will cover some best practice tips for bulk importing data into PostgreSQL databases. However, there may be situations where none of these tips will be an efficient solution. We recommend readers consider the pros and cons of any method before applying it.

Tip 1: Change Target Table to Un-logged Mode

For PostgreSQL 9.5 and above, the target table can be first altered to UNLOGGED, then altered back to LOGGED once the data is loaded:

ALTER TABLE <target_table> SET UNLOGGED;
<bulk data insert operations…>
ALTER TABLE <target_table> SET LOGGED;

The UNLOGGED mode stops PostgreSQL from writing table changes to the Write Ahead Log (WAL), which can make the load process significantly faster. However, since the operations are not logged, data cannot be recovered if there is a crash or unclean server shutdown during the load: PostgreSQL automatically truncates any unlogged table when it restarts after such an event.

Also, unlogged tables are not replicated to standby servers. In such cases, existing replication has to be removed before the load and recreated after the load. Depending on the volume of data in the primary node and the number of standbys, the time needed to recreate replication may be quite long, which may not be acceptable under high-availability requirements.

We recommend the following best practices for bulk inserting data into un-logged tables:

  • Making a backup of the table and data before altering it to an un-logged mode
  • Recreating any replication to standby servers once data load is complete
  • Using un-logged bulk inserts for tables which can be easily repopulated (e.g. large lookup tables or dimension tables)
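
As a minimal sketch of the whole workflow, assuming a hypothetical sales_staging table and CSV file (the names and path are illustrative only, not from the original post):

-- Optional safety net: keep a copy of the existing data first, since an
-- unlogged table is truncated after a crash or unclean shutdown.
CREATE TABLE sales_staging_backup AS TABLE sales_staging;

-- Stop writing this table's changes to the WAL (PostgreSQL 9.5+)
ALTER TABLE sales_staging SET UNLOGGED;

-- Bulk load the data (see Tip 5 for COPY)
COPY sales_staging FROM '/tmp/sales.csv' WITH (FORMAT csv, HEADER true);

-- Switch WAL logging back on; note this writes the whole table to the WAL
ALTER TABLE sales_staging SET LOGGED;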

Tip 2: Drop and Recreate Indexes

Existing indexes can cause significant delays during bulk data inserts. This is because as each row is added, the corresponding index entry has to be updated as well.

We recommend dropping indexes in the target table where possible before starting the bulk insert, and recreating them once the load is complete. Again, creating indexes on large tables can be time-consuming, but it will generally be faster than updating the indexes during the load.

DROP INDEX <index_name1>, <index_name2>, …, <index_name_n>;
<bulk data insert operations…>
CREATE INDEX <index_name> ON <target_table> (<column1>, …, <column_n>);

It may be worthwhile to temporarily increase the maintenance_work_mem configuration parameter just before creating the indexes. The increased working memory can help create the indexes faster.
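
As a sketch, with a hypothetical idx_orders_customer index on an orders table; the 2GB figure is only an example and should be sized to the server:

-- Drop the index so the bulk insert does not have to maintain it
DROP INDEX idx_orders_customer;

-- … bulk data insert operations …

-- Give index creation extra working memory for this session only
SET maintenance_work_mem = '2GB';
CREATE INDEX idx_orders_customer ON orders (customer_id);
RESET maintenance_work_mem;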

Another safe option is to make a copy of the target table in the same database, with its existing data and indexes. The bulk insert can then be tested against this copy under both scenarios: dropping and recreating the indexes, or updating them dynamically during the load. The method that yields better performance can then be applied to the live table.

Tip 3: Drop and Recreate Foreign Keys

Like indexes, foreign key constraints can also impact bulk load performance. This is because each foreign key in each inserted row has to be checked for the existence of a corresponding primary key. Behind the scenes, PostgreSQL uses a trigger to perform the checking. When loading a large number of rows, this trigger has to be fired for each row, adding to the overhead.

Unless restricted by business rules, we recommend dropping all foreign keys from the target table, loading the data in a single transaction, then recreating the foreign keys after committing the transaction.

ALTER TABLE <target_table>
DROP CONSTRAINT <foreign_key_constraint>;

BEGIN TRANSACTION;
<bulk data insert operations…>
COMMIT;

ALTER TABLE <target_table>
ADD CONSTRAINT <foreign_key_constraint>
FOREIGN KEY (<foreign_key_field>)
REFERENCES <parent_table> (<primary_key_field>);

Once again, increasing the maintenance_work_mem configuration parameter can improve the performance of recreating foreign key constraints.
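
As a concrete sketch, using hypothetical orders and customers tables (the constraint and column names are ours):

-- Drop the foreign key so each inserted row is not checked against customers
ALTER TABLE orders DROP CONSTRAINT orders_customer_id_fkey;

BEGIN;
-- … bulk data insert operations …
COMMIT;

-- Recreate the constraint; all rows are re-validated at this point,
-- so extra maintenance_work_mem helps here as well
SET maintenance_work_mem = '2GB';
ALTER TABLE orders
ADD CONSTRAINT orders_customer_id_fkey
FOREIGN KEY (customer_id) REFERENCES customers (customer_id);
RESET maintenance_work_mem;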

Tip 4: Disable Triggers

INSERT or DELETE triggers (if the load process also involves deleting records from the target table) can cause delays in bulk data loading. This is because each trigger will have logic that needs to be checked and operations that need to complete right after each row is INSERTed or DELETEd. 

We recommend disabling all triggers in the target table before bulk loading the data and enabling them after the load is finished. Disabling ALL triggers also includes the system triggers that enforce foreign key constraint checks.

ALTER TABLE <target_table> DISABLE TRIGGER ALL;
<bulk data insert operations…>
ALTER TABLE <target_table> ENABLE TRIGGER ALL;
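
For example, on the same hypothetical orders table (note that disabling ALL triggers also disables the internally generated constraint triggers behind foreign keys, which typically requires superuser privileges):

-- Skip all trigger execution during the load (use with care)
ALTER TABLE orders DISABLE TRIGGER ALL;

-- … bulk data insert operations …

-- Re-enable the triggers; rows inserted while they were disabled are not
-- re-checked or re-processed afterwards
ALTER TABLE orders ENABLE TRIGGER ALL;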

Tip 5: Use COPY Command

We recommend using the PostgreSQL COPY command to load data from one or more files. COPY is optimized for bulk data loads. It's more efficient than running a large number of INSERT statements or even multi-valued INSERTs.

COPY <target_table> [(<column1>, …, <column_n>)]
FROM '<file_name_and_path>'
WITH (<option1>, <option2>, …, <option_n>);

Other benefits of using COPY include:

  • It supports both text and binary file import
  • It’s transactional in nature
  • It allows specifying the structure of the input files
  • It can conditionally load data using a WHERE clause (PostgreSQL 12 and later)
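
A minimal sketch, assuming a hypothetical CSV file and the same hypothetical orders table; the WHERE clause requires PostgreSQL 12 or later:

-- Server-side COPY reads the file from the database server's filesystem;
-- from psql, the client-side \copy variant can be used instead
COPY orders (order_id, customer_id, order_date, amount)
FROM '/var/lib/postgresql/import/orders.csv'
WITH (FORMAT csv, HEADER true)
WHERE order_date >= DATE '2020-01-01';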

Tip 6: Use Multi-valued INSERT

Running several thousand or even several hundred thousand INSERT statements can be a poor choice for a bulk data load. That's because each individual INSERT command has to be parsed and prepared by the query optimizer, go through all the constraint checking, run as a separate transaction, and be logged in the WAL. Using a single multi-valued INSERT statement can save this overhead.

INSERT INTO <target_table> (<column1>, <column2>, …, <column_n>) 
VALUES 
(<value a>, <value b>, …, <value x>),
(<value 1>, <value 2>, …, <value n>),
(<value A>, <value B>, …, <value Z>),
(<value i>, <value ii>, …, <value L>),
...

Multi-valued INSERT performance is affected by existing indexes. We recommend dropping the indexes before running the command and recreating the indexes afterwards. 

Another area to be aware of is the amount of memory available to PostgreSQL for running multi-valued INSERTs. When a multi-valued INSERT is run, a large number of input values have to fit in RAM, and unless there is sufficient memory available, the process may fail.

We recommend setting the effective_cache_size parameter to 50% and the shared_buffers parameter to 25% of the machine's total RAM. Also, to be safe, we recommend running a series of multi-valued INSERTs, with each statement supplying values for 1,000 rows.
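
As a rough sketch for a hypothetical machine with 16 GB of RAM, those recommendations work out to the settings below; the orders table, column names, and values are illustrative only:

-- 25% of 16 GB for shared_buffers (needs a server restart to take effect)
ALTER SYSTEM SET shared_buffers = '4GB';
-- 50% of 16 GB for effective_cache_size (a configuration reload is enough)
ALTER SYSTEM SET effective_cache_size = '8GB';
SELECT pg_reload_conf();

-- Load in batches of roughly 1,000 rows per INSERT statement
INSERT INTO orders (order_id, customer_id, amount) VALUES
(1, 101, 25.00),
(2, 102, 42.50),
-- … repeat until ~1,000 rows, then start a new INSERT …
(1000, 250, 18.75);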

Tip 7: Run ANALYZE

This is not related to improving bulk data import performance, but we strongly recommend running the ANALYZE command on the target table immediately after the bulk import. A large number of new rows will significantly skew the data distribution in columns and will cause any existing statistics on the table to be out-of-date. When the query optimizer uses stale statistics, query performance can be unacceptably poor. Running the ANALYZE command will ensure any existing statistics are updated.
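
For example, against the same hypothetical orders table used above:

-- Refresh the planner statistics for the freshly loaded table;
-- VERBOSE is optional and simply prints progress messages
ANALYZE VERBOSE orders;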

Final Thoughts

Bulk data imports may not happen every day for a database application, but when they do run, they impact query performance. That's why it's worth minimizing load time as much as possible. One thing DBAs can do to minimize any surprises is to test the load optimizations in a development or staging environment with similar server specifications and a similar PostgreSQL configuration. Every data load scenario is different, and it's best to try out each method and find the one that works best.

