
PG Phriday: Getting RAD with Docker [Part 1]

October 20, 2017 / in Shaun's PlanetPostgreSQL / by Shaun Thomas

Fans of Rapid Application Development (RAD!) are probably already familiar with Docker, but what does that have to do with Postgres? Database-driven applications are a dime a dozen these days, and a good RAD environment is something of a Holy Grail to coders and QA departments alike. Docker lets us spin up a Postgres instance in seconds, and discard it with a clean conscience.

There have even been some noises within certain circles about using it in a production context. Can we do something like that responsibly? Docker containers are practically guaranteed to be ephemeral, while production data most decidedly isn’t. The answer to this is ultimately complex, and something we’ll be exploring over the next several weeks.

Let’s get started.

Let There Be Light

Since Docker itself is a commercial product, getting the latest community packages (Docker CE) means creating a free account on their website and downloading the packages for your preferred platform. For slightly older versions, RHEL and CentOS variants can install EPEL, where the docker-io package resides. Debian variants such as Ubuntu can simply install docker.io with apt.
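
As a rough sketch, and assuming the repositories above are configured, the install itself is a one-liner (package names vary by distribution and release):

$> sudo apt-get install docker.io     # Debian / Ubuntu
$> sudo yum install docker-io         # RHEL / CentOS with EPEL enabled

$> docker --version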

Once we have Docker itself, we can actually start playing with it right away. We should start by grabbing the latest version of Postgres from the official Docker Hub image:

$> docker pull postgres:latest

latest: Pulling from library/postgres
3e17c6eae66c: Pull complete 
3d89ae9a47e4: Pull complete 
f7726fda7efe: Pull complete 
d1838499bd8f: Pull complete 
a5ec5aa60735: Pull complete 
1571d7170291: Pull complete 
0d6e41e13732: Pull complete 
787e3c45a9bb: Pull complete 
7b234cf83b22: Pull complete 
3a8ad2440289: Pull complete 
9351993374c0: Pull complete 
a8f3575e09a1: Pull complete 
a4c4b2ff0c3a: Pull complete 
Digest: sha256:73a1c4e98fb961bb4a5c55ad6428470a3303bd3966abc442fe937814f6bbc002
Status: Downloaded newer image for postgres:latest

$> docker images

REPOSITORY    TAG         IMAGE ID          CREATED           SIZE
postgres      latest      b106d5a0dc75      22 hours ago      287.2 MB

Since Docker images can build on top of each other, we can see that the Postgres image is assembled from a long chain of layers. Despite that, the image itself isn't very large; 287MB for an entire operating system and database server is pretty good. There are even smaller versions if we are willing to forgo certain UNIX tools. The postgres:10-alpine image, for instance, is just under 40MB.
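
Verifying that claim is easy enough; pull the Alpine variant and compare the SIZE column (image IDs and exact sizes will naturally drift over time):

$> docker pull postgres:10-alpine
$> docker images postgres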

Runnin’ with the Devil

Just having an image is pretty pointless, of course. Let's start a container and see what's inside. This is done with the docker run command. We'll pass it a few parameters to make things easier to use later as well. Setting the name, for example, lets us manipulate the container later without having to use an ugly hexadecimal ID.

Since Docker containers run on a separate network layer, we want to publish the default Postgres 5432 port to 5555 to prevent conflicts with existing applications on the host system. And finally, we want to detach from the terminal so we can use it for other things, and that’s also probably how we’ll be using the container in any reasonable configuration.

$> docker run --name pg-test -p 5555:5432 -d postgres

69f21490ff415e485501cd9c3ed04336d7c4147f77025b7de8dbf565b11fdf52

$> docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
69f21490ff41        postgres            "docker-entrypoint.sh"   40 seconds ago      Up 39 seconds       0.0.0.0:5555->5432/tcp   pg-test

$> psql -p 5555 -U postgres -h localhost

Timing is on.
Pager usage is off.
psql (10.0)
Type "help" for help.

postgres=# SELECT version();
                                             version                                              
--------------------------------------------------------------------------------------------------
 PostgreSQL 10.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18) 6.3.0 20170516, 64-bit
(1 row)

Run and done. We can connect to the container on port 5555 as expected, we have the latest Postgres 10, and Postgres confirms its full version and build environment. Let’s try to make some tables.

Playing God

We've used the sensor_log table a lot in the past. A good reason for that is the sheer size of the data we normally insert. There's no reason to stop now, is there? Let's keep our connection open and run a couple of basic timings:

\timing on

CREATE TABLE sensor_log (
  sensor_log_id  SERIAL PRIMARY KEY,
  location       VARCHAR NOT NULL,
  reading        BIGINT NOT NULL,
  reading_date   TIMESTAMP NOT NULL
);

Time: 7.945 ms

INSERT INTO sensor_log (location, reading, reading_date)
SELECT s.id % 1000, round(random() * 100),
       CURRENT_DATE + INTERVAL '1d' - ((s.id * 10)::TEXT || 's')::INTERVAL
  FROM generate_series(1, 5000000) s(id);

Time: 30981.570 ms (00:30.982)

CREATE INDEX idx_sensor_log_location ON sensor_log (location);

Time: 9339.747 ms (00:09.340)

CREATE INDEX idx_sensor_log_date ON sensor_log (reading_date);

Time: 3689.406 ms (00:03.689)

ANALYZE sensor_log;

Time: 247.532 ms

SELECT count(*) FROM sensor_log;

  count  
---------
 5000000

Time: 518.908 ms

SELECT count(*) FROM sensor_log
 WHERE location = '50';

 count 
-------
  5000

Time: 8.243 ms

SELECT count(*) FROM sensor_log
 WHERE reading_date BETWEEN '2016-01-01' AND '2016-06-01';

 count  
--------
 680001
(1 row)

Time: 139.237 ms

One weakness of this approach is that the Postgres data exists entirely within this single container. Since containers are intended to be temporary, that’s not great if we want to use the container as a somewhat bulky linked binary. We want to be able to kill—and even erase—our container in some cases, yet retain the data.

How do we manage that?

Sharing the Wealth

One way to do this with Docker is volumes. We can build a volume and start another container that uses it like this:

$> docker volume create --name pgdata
$> docker run --name pg-mount -p 5600:5432 -d \
              --volume pgdata:/var/lib/postgresql/data postgres

We created a pgdata volume, then told Docker that the volume should be mounted at /var/lib/postgresql/data. This is the default location used by the maintainers of the official image, so a new container will use the external volume instead of the built-in storage.
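
If we're curious where Docker actually keeps the volume on the host, docker volume inspect will tell us. With the default local driver, the output looks roughly like this (abridged; exact fields vary by Docker version):

$> docker volume inspect pgdata

[
    {
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/pgdata/_data",
        "Name": "pgdata"
    }
]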

Let’s connect to this new container, make a very small proof-of-concept table, then stop and erase the container. Then we can create a new container that uses the same volume, and see if the table is still there.

$> psql -p 5600 -U postgres -h localhost \
        -c "CREATE TABLE foo (id SERIAL);"

$> docker stop pg-mount
$> docker rm pg-mount
$> docker run --name pg-new-mount -p 5600:5432 -d \
              --volume pgdata:/var/lib/postgresql/data postgres

$> psql -p 5600 -U postgres -h localhost -c "\d foo"

                            Table "public.foo"
 Column |  Type   | Collation | Nullable |             Default             
--------+---------+-----------+----------+---------------------------------
 id     | integer |           | not null | nextval('foo_id_seq'::regclass)

Nice, eh? Now we can test various container variants, or image builds, against the same data volume.
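
As a quick illustration, and assuming the data directory was initialized by the same Postgres major version, we could even point the Alpine variant at the same volume. Treat this strictly as an experiment, though; glibc and musl images don't always agree on locales and collations.

$> docker stop pg-new-mount
$> docker run --name pg-alpine -p 5600:5432 -d \
              --volume pgdata:/var/lib/postgresql/data postgres:10-alpine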

Starting from Scratch

But what about performance? Docker containers utilize copy-on-write (CoW) semantics on top of a layered union filesystem. Is it possible that this abstraction layer could be interfering with our desired use of the hardware?

One of the handy things Docker can do is mount locations on the host machine. Let's start another container, mapping its data directory to a path on the Docker host, and see if the timings change at all.

We start by launching a new Docker container with the desired mount. The cool thing about the --volume flag is that it will create the directory for us if it doesn’t already exist. It’ll even give it the correct permissions, since Docker itself runs as a management daemon under the root user, and not the postgres user we’re probably accustomed to.

$> docker run --name pg-local -p 5700:5432 -d \
              --volume /db/docker:/var/lib/postgresql/data postgres

$> ls -l /db

drwx------ 19 vboxadd  root  4096 Oct 13 17:09 docker

Now let's connect to port 5700 and repeat our sensor_log timings:

\timing on

CREATE TABLE sensor_log (
  sensor_log_id  SERIAL PRIMARY KEY,
  location       VARCHAR NOT NULL,
  reading        BIGINT NOT NULL,
  reading_date   TIMESTAMP NOT NULL
);

Time: 7.977 ms

INSERT INTO sensor_log (location, reading, reading_date)
SELECT s.id % 1000, round(random() * 100),
       CURRENT_DATE + INTERVAL '1d' - ((s.id * 10)::TEXT || 's')::INTERVAL
  FROM generate_series(1, 5000000) s(id);

Time: 30180.467 ms (00:30.180)

CREATE INDEX idx_sensor_log_location ON sensor_log (location);

Time: 9279.165 ms (00:09.279)

CREATE INDEX idx_sensor_log_date ON sensor_log (reading_date);

Time: 3268.765 ms (00:03.269)

ANALYZE sensor_log;

Time: 205.313 ms

SELECT count(*) FROM sensor_log;

  count  
---------
 5000000

Time: 483.312 ms

SELECT count(*) FROM sensor_log
 WHERE location = '50';

 count 
-------
  5000

Time: 7.234 ms

SELECT count(*) FROM sensor_log
 WHERE reading_date BETWEEN '2016-01-01' AND '2016-06-01';

 count  
--------
 680001
(1 row)

Time: 141.441 ms

It turns out the filesystem mapping is comparable to Docker’s own filesystem, as the differences are statistically insignificant. Of course, now instead of using its own internal filesystem layer, Docker must maintain a mapping to the local filesystem, so it’s hard to say what’s actually going on behind the scenes. We’re also operating in a relatively puny virtual environment with its own storage performance limitations.
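
For reference, here are the two runs side by side:

Operation                      Internal FS     Host Mount
-----------------------------  ------------    ------------
INSERT 5M rows                 30981.570 ms    30180.467 ms
CREATE INDEX (location)         9339.747 ms     9279.165 ms
CREATE INDEX (reading_date)     3689.406 ms     3268.765 ms
ANALYZE                          247.532 ms      205.313 ms
SELECT count(*)                  518.908 ms      483.312 ms
Count by location                  8.243 ms        7.234 ms
Count by date range              139.237 ms      141.441 ms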

Still, a commendable result, as it leaves us the option to use standard filesystem paths. LVM? Sure. XFS? Why not. An external SAN device composed entirely of SSDs? Of course. There are interesting implications here.

Out of Eden

So are containers really just a bloated linked Postgres binary, since we could have just managed a local directory with a standard Postgres install? Yes and no. We’ll definitely explore more of these concepts in later articles, but it’s important to note that containers are meant for chaining and automation.

A standard Postgres install has a certain amount of administration involved: the configuration file, user creation, package maintenance, and so on. There is a lot to building a Postgres server. Since each container is already a tiny preconfigured Postgres server, we don't really need to do any of that in simplified contexts.
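
The official image even handles some of that bootstrapping through environment variables; POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB are all recognized by its entrypoint script. As a sketch, with placeholder credentials:

$> docker run --name pg-app -p 5800:5432 -d \
              -e POSTGRES_USER=app_user \
              -e POSTGRES_PASSWORD=app_secret \
              -e POSTGRES_DB=app_db \
              postgres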

Did the official Postgres package see an update? We can just update to the latest image, stop the existing container, and start a new one. The old container is still there, so reverting is trivial.
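
In practice, that dance might look something like this, with pg-old standing in for whichever container currently owns the volume (and sticking to the same major version, since the on-disk format changes between major releases):

$> docker pull postgres:latest
$> docker stop pg-old
$> docker run --name pg-new -p 5555:5432 -d \
              --volume pgdata:/var/lib/postgresql/data postgres:latest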

We can make our own image with contrib, and plpython, and pglogical, and launch that anywhere and everywhere. We can hand it to developers who don't know anything about Postgres, and they can have their own database server that mirrors the production environment. And we can do it without an army of resource-intensive, difficult-to-provision VMs.
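
A custom image along those lines is only a small Dockerfile away. As a minimal sketch: the PL/Python package below comes from the PGDG repositories the official image already uses, while pglogical would additionally require the 2ndQuadrant repository, which we omit here.

FROM postgres:10

# contrib already ships in the official image, so we only add PL/Python.
# Adding pglogical would first require configuring the 2ndQuadrant apt repo.
RUN apt-get update \
 && apt-get install -y --no-install-recommends postgresql-plpython3-10 \
 && rm -rf /var/lib/apt/lists/*

Build it once, and everyone gets the same server:

$> docker build -t my-postgres .
$> docker run --name pg-dev -p 5999:5432 -d my-postgres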

This is just the very tiniest tip of the proverbial iceberg. We’ll continue to build our toolkit in future articles, and maybe end up with something that isn’t arguably worse than a standard server environment, just different.

Tags: 2QLovesPG, Containers, Docker, PG Phriday, Rapid Application Development
6 replies
  1. Craig H Maynard says:
    October 21, 2017 at 7:49 pm

    Looks like you have psql installed locally. Does that mean you also installed POSTGRES locally? If so, why use a Docker container?

    • Shaun Thomas says:
      October 22, 2017 at 5:05 pm

      Why not? Maybe I want to connect from another system that already has the psql client installed. There's no hard and fast rule that dictates no local clients or libraries can ever be installed on a Docker host, or that every command absolutely must be executed from within another Docker container. Keeping the server contained makes it portable; clients can exist anywhere.

  2. Frank Gard says:
    October 22, 2017 at 8:30 am

    Great article! Thanks a lot.
    One marginal correction: according to the current Docker docs (https://docs.docker.com/engine/installation/linux/docker-ce/debian/ and https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/), one should use the package "docker-ce", not "docker.io", on Debian-based systems.
    Cheers,
    Frank.

    • Shaun Thomas says:
      October 22, 2017 at 5:00 pm

      Thanks! I was looking for that pesky documentation everywhere. I could only find download links from the official Docker site, not references on how to enable their repos directly. That's why I noted that the local repos have "older versions", and that the CE versions are available from Docker if you want updated packages. At least, that's true of the Ubuntu 16.04 LTS repos I tested with; it looks like pure Debian has them in its backports.

  3. Rama Shankar says:
    October 23, 2017 at 6:11 pm

    Hi, this is Rama Shankar. Is there a way to create two Postgres containers sharing the same volume? If I create database objects in the first container, will they show up in the other container?

    • Shaun Thomas says:
      October 23, 2017 at 8:44 pm

      The postgres image is designed to only bootstrap a new database directory if the existing one is empty. So if you have two containers using the same volume as in the example with the ‘pgdata’ volume, the first container would create a new database instance there. Any subsequent container could then mount it and operate normally, assuming you were careful to ensure only one container had it mounted at a time.
