PostgreSQL-based application performance: latency and hidden delays

Goldfields Pipeline, Western Australia (photo by SeanMac, Wikimedia Commons)

If you’re trying to optimise the performance of your PostgreSQL-based application you’re probably focusing on the usual tools: EXPLAIN (BUFFERS, ANALYZE), pg_stat_statements, auto_explain, log_min_duration_statement, etc.

Maybe you’re also looking into lock contention with log_lock_waits, monitoring checkpoint performance, and so on.

But did you think about network latency? Gamers know about network latency, but did you think it mattered for your application server?

Latency matters

Typical client/server round-trip network latencies can range from 0.01ms (localhost) through the ~0.5ms of a switched network, 5ms of WiFi, 20ms of ADSL, 300ms of intercontinental routing, and even more for things like satellite and WWAN links.

A trivial SELECT can take on the order of 0.1ms to execute server-side. A trivial INSERT can take 0.5ms.

Every time your application runs a query it has to wait for the server to respond with success/failure and possibly a result set, query metadata, etc. This incurs at least one network round trip delay.

When you’re working with small, simple queries network latency can be significant relative to the execution time of your queries if your database isn’t on the same host as your application.

Many applications, particularly ORMs, are very prone to running lots of quite simple queries. For example, if your Hibernate app fetches an entity with a lazily fetched @OneToMany relationship to 1000 child items, it’s probably going to run 1001 queries thanks to the n+1 select problem, if not more. That means it’s probably spending 1000 times your network round-trip latency just waiting. You can use left join fetch to avoid that… but then you transfer the parent entity 1000 times in the join and have to deduplicate it.
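To put rough numbers on that, here is a back-of-envelope model using the example latencies and query times above (the figures are illustrative assumptions, not measurements):

```python
# Back-of-envelope cost of the n+1 select pattern: every query pays
# one network round trip on top of its server-side execution time.
# All figures below are illustrative assumptions, not measurements.

def total_seconds(n_queries, server_ms, rtt_ms):
    """Wall-clock estimate: each query waits for exec time + one round trip."""
    return n_queries * (server_ms + rtt_ms) / 1000.0

n = 1 + 1000          # parent entity + 1000 lazily fetched children
server_ms = 0.1       # trivial SELECT, server-side

for label, rtt_ms in [("localhost", 0.01), ("LAN", 0.5), ("ADSL", 20.0)]:
    t = total_seconds(n, server_ms, rtt_ms)
    waiting = n * rtt_ms / 1000.0
    print(f"{label}: {t:.2f}s total, {waiting:.2f}s of it just waiting")
```

Over ADSL, the 1001 queries from the example above spend about 20 seconds doing nothing but waiting on the network.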

Similarly, if you’re populating the database from an ORM, you’re probably doing hundreds of thousands of trivial INSERTs… and waiting after each and every one for the server to confirm it’s OK.

It’s easy to focus on query execution time and optimise that, but there’s only so much you can do with a trivial INSERT INTO ... VALUES .... Drop some indexes and constraints, make sure it’s batched into a transaction, and you’re pretty much done.

What about getting rid of all the network waits? Even on a LAN they start to add up over thousands of queries.


One way to avoid latency is to use COPY. To use PostgreSQL’s COPY support your application or driver has to produce a CSV-like set of rows and stream them to the server in a continuous sequence. Or the server can be asked to send your application a CSV-like stream.

Either way, the app can’t interleave a COPY with other queries, and copy-inserts must be loaded directly into a destination table. A common approach is to COPY into a temporary table, then use INSERT INTO ... SELECT ..., UPDATE ... FROM ..., DELETE FROM ... USING ..., etc. to apply the copied data to the main tables in a single operation.
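A sketch of that pattern with psycopg2, whose copy_expert exposes COPY ... FROM STDIN (the table and column names here are hypothetical examples, not from any real schema):

```python
# Sketch: stream rows with COPY into a temp table, then apply them to the
# main table in one set-based statement. The table name "items" and the
# (id, value) shape are hypothetical examples.
import io

def rows_to_tsv(rows):
    """Format (id, value) tuples as the tab-separated text COPY expects."""
    return "".join("%d\t%s\n" % r for r in rows)

def bulk_insert_via_copy(conn, rows):
    """conn: an open psycopg2 connection; rows: iterable of (id, value)."""
    buf = io.StringIO(rows_to_tsv(rows))
    with conn.cursor() as cur:
        cur.execute("CREATE TEMP TABLE staging (LIKE items) ON COMMIT DROP")
        # One continuous stream to the server: no per-row round trips.
        cur.copy_expert("COPY staging FROM STDIN", buf)
        # A single statement merges the whole batch into the main table.
        cur.execute("INSERT INTO items SELECT * FROM staging")
    conn.commit()
```

The same shape works for bulk updates or deletes: just swap the final statement for an UPDATE ... FROM staging or DELETE ... USING staging.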

That’s handy if you’re writing your own SQL directly, but many application frameworks and ORMs don’t support it, and it can only directly replace simple INSERTs. Your application, framework or client driver has to handle conversion to the special representation COPY needs, look up any required type metadata itself, etc.

(Notable drivers that do support COPY include libpq, PgJDBC, psycopg2, and the Pg gem… but not necessarily the frameworks and ORMs built on top of them.)

PgJDBC – batch mode

PostgreSQL’s JDBC driver has a solution for this problem. It relies on support present in PostgreSQL servers since 8.4 and on the JDBC API’s batching features to send a batch of queries to the server then wait only once for confirmation that the entire batch ran OK.

Well, in theory. In reality some implementation challenges limit this, so batches can only be done in chunks of a few hundred queries at best, and the driver can only run queries that return result rows in batched chunks if it can figure out how big the results will be ahead of time. Despite those limitations, Statement.executeBatch() can offer a huge performance boost to applications doing tasks like bulk-loading data into remote database instances.

Because it’s a standard API it can be used by applications that work across multiple database engines. Hibernate, for example, can use JDBC batching though it doesn’t do so by default.

libpq and batching

Most (all?) other PostgreSQL drivers have no support for batching. PgJDBC implements the PostgreSQL protocol completely independently, whereas most other drivers internally use the C library libpq that’s supplied as part of PostgreSQL.

libpq does not support batching. It does have an asynchronous non-blocking API, but the client can still only have one query “in flight” at a time. It must wait until the results of that query are received before it can send another.

The PostgreSQL server supports batching just fine, and PgJDBC uses it already. So I’ve written batch support for libpq and submitted it as a candidate for the next PostgreSQL version. Since it only changes the client, if accepted it’ll still speed things up when connecting to older servers.

I’d be really interested in feedback from authors and advanced users of libpq-based client drivers and developers of libpq-based applications. The patch applies fine on top of PostgreSQL 9.6beta1 if you want to try it out. The documentation is detailed and there’s a comprehensive example program.


I thought a hosted database service like RDS or Heroku Postgres would be a good example of where this kind of functionality would be useful. In particular, accessing them from outside their own networks really shows how much latency can hurt.

At ~320ms network latency:

  • 500 inserts without batching: 167.0s
  • 500 inserts with batching: 1.2s

… which is over 120x faster.

You won’t usually be running your app over an intercontinental link between the app server and the database, but this serves to highlight the impact of latency. Even over a unix socket to localhost I saw over a 50% performance improvement for 10000 inserts.
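Those figures are consistent with a simple round-trip model: at ~320ms per round trip, 500 individual inserts spend roughly 500 × 0.32 ≈ 160s doing nothing but waiting on the network, which is almost all of the 167s observed.

```python
# Sanity-check the measured numbers against the round-trip model.
rtt = 0.320            # seconds; approximate intercontinental latency
unbatched = 167.0      # measured: 500 inserts, one round trip each
batched = 1.2          # measured: the same 500 inserts, batched

network_wait = 500 * rtt        # ~160s, ~96% of the unbatched total
speedup = unbatched / batched   # ~139x
print(f"predicted wait {network_wait:.0f}s, speedup {speedup:.0f}x")
```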

Batching in existing apps

It is unfortunately not possible to automatically enable batching for existing applications. Apps have to use a slightly different interface where they send a series of queries and only then ask for the results.

It should be fairly simple to adapt apps that already use the asynchronous libpq interface, especially if they use non-blocking mode and a select()/poll()/epoll()/WaitForMultipleObjectsEx loop. Apps that use the synchronous libpq interfaces will require more changes.

Batching in other client drivers

Similarly, client drivers, frameworks and ORMs will generally need interface and internal changes to permit the use of batching. If they’re already using an event loop and non-blocking I/O they should be fairly simple to modify.

I’d love to see Python, Ruby, etc users able to access this functionality, so I’m curious to see who’s interested. Imagine being able to do this:

import psycopg2
conn = psycopg2.connect(...)
cur = conn.cursor()

# this is just an idea, this code does not work with psycopg2:
futures = [ cur.async_execute(sql) for sql in my_queries ]
for future in futures:
    result = future.result  # waits if result not ready yet
    ... process the result ...

Asynchronous batched execution doesn’t have to be complicated at the client level.

COPY is fastest

Where practical, clients should still favour COPY. Here are some results from my laptop:

inserting 1000000 rows batched, unbatched and with COPY
batch insert elapsed:      23.715315s
sequential insert elapsed: 36.150162s
COPY elapsed:              1.743593s

Batching the work provides a surprisingly large performance boost even on a local unix socket connection… but COPY leaves both individual-insert approaches in the dust.
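The relative speedups implied by those timings (simple arithmetic on the numbers above):

```python
# Speedup ratios from the 1,000,000-row timings above.
sequential = 36.150162   # individual inserts, one round trip each
batch = 23.715315        # batched inserts
copy = 1.743593          # COPY

print(f"batch vs sequential: {sequential / batch:.1f}x")  # ~1.5x
print(f"COPY vs sequential:  {sequential / copy:.1f}x")   # ~20.7x
print(f"COPY vs batch:       {batch / copy:.1f}x")        # ~13.6x
```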


The image

The image for this post is of the Goldfields Water Supply Scheme pipeline from Mundaring Weir near Perth in Western Australia to the inland (desert) goldfields. It’s relevant because it took so long to finish and was under such intense criticism that its designer and main proponent, C. Y. O’Connor, committed suicide 12 months before it was commissioned. Locally, people often (incorrectly) say that he died after the pipeline was built, when no water flowed – because it took so long, everyone assumed the project had failed. Then, weeks later, out the water poured.

2 replies

  1. Sergey says:

     > So I’ve written batch support for libpq and submitted it as a candidate for the next PostgreSQL version.

     Does it work with pgbouncer in transaction pooling mode?
