2ndQuadrant | PostgreSQL
Mission Critical Databases
And Barman 1.6.0 is out!

March 8, 2016 / in Featured, Rubens' PlanetPostgreSQL / by Rubens Souza

Good news is out to make your disaster recovery strategy even better!

Wait…what?! You don’t have a disaster recovery strategy?! No good, my friend, no good…

But don’t despair. As I was saying, the good news is that Barman, the powerful backup and recovery manager for PostgreSQL, has just released version 1.6.0, which comes with important new features and, as expected, bug fixes.


So, if you are already using Barman, just update it to start using the new features. And if you don’t use it yet, now is the time to start putting together your disaster recovery plan. It’s better not to wait…trust me on that one. 🙂

“And what are those new features?” you ask. Here we go:

Streaming connection & WAL streaming

This is the main feature of this release. Barman is now capable of connecting to the database server and continuously receiving WAL files via PostgreSQL’s native streaming replication protocol, which reduces your RPO (Recovery Point Objective). If the streaming connection fails for any reason, standard log archiving takes over right away, making sure WALs keep being archived. Version 1.6.0 still requires that standard archiving is in place (which means you will end up transferring WAL files twice, over two different channels, but that is a small price to pay for near-zero RPO).

Enabling the PostgreSQL streaming connection on the Barman server is pretty straightforward. Just open the configuration file of your backup server, which by default lives in the /etc/barman.d/ directory, and add the following settings:

streaming_conninfo = host=your.postgresql.server user=streaming_barman
streaming_archiver = on
archiver = on
path_prefix = /path/to/pg_receivexlog/

Each line above provides part of the setup that Barman needs to successfully use the streaming connection:

  1. The first line configures streaming_conninfo in the same way as the already present conninfo:
  • host indicates your PostgreSQL server.
  • user is the user for this connection. The streaming_barman user has to be created on your database server and granted access in pg_hba.conf accordingly.
  2. The second line activates streaming_archiver.
  3. The third line, as you can guess, activates archiver. Actually, archiver is activated by default, but I’m including this line here to make sure I tell you that. 🙂
  4. The fourth line indicates the directory where pg_receivexlog can be found. For example, it could be /usr/pgsql-9.5/bin/. Barman uses pg_receivexlog for the WAL streaming and, as it is not present on the Barman server by default, you have to install the PostgreSQL client packages there. The PostgreSQL server itself is not needed on your Barman server, only the client.
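As a concrete sketch of the server-side setup mentioned in step 1 (the subnet and authentication method below are hypothetical; adjust them to your environment):

```sql
-- On the PostgreSQL server: create a user allowed to open
-- streaming replication connections (used by pg_receivexlog).
CREATE USER streaming_barman WITH REPLICATION;

-- Then allow that user in pg_hba.conf (hypothetical subnet/method):
--   host  replication  streaming_barman  192.168.0.0/24  md5
-- and reload the configuration so the new rule takes effect:
--   SELECT pg_reload_conf();
```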

As a last tip, for the streaming connection to work, make sure you have psycopg2 version 2.4.2 or newer installed on your Barman server.
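When checking that requirement, compare version strings numerically rather than lexically. A minimal sketch (the 2.4.2 minimum comes from the tip above; the helper name is mine):

```python
def meets_minimum(version: str, minimum: str = "2.4.2") -> bool:
    """Return True if a dotted version string is >= the minimum.

    Comparing tuples of ints avoids the classic string-comparison
    trap where "2.10.0" < "2.4.2" lexically.
    """
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

# psycopg2 exposes its version as psycopg2.__version__ (e.g.
# "2.4.2 (dt dec pq3 ext)"), so you would pass
# psycopg2.__version__.split()[0] to this helper.
```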

Implicit restore point

Although this feature has been present since version 1.5.1, it wasn’t mentioned in the Barman documentation until now, so I think it is worth listing here. Barman automatically creates a restore point named “barman_BACKUPID” immediately after a backup, allowing you to use this label during recovery via the --target-name option.
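For example, recovering to the implicit restore point could look like this (the server name, backup ID, and destination path below are hypothetical):

```shell
# Restore the backup and stop recovery at the restore point
# Barman created right after it (label barman_<BACKUPID>):
barman recover --target-name barman_20160308T020000 \
    main 20160308T020000 /var/lib/pgsql/9.5/data
```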

New WAL compression algorithms

Besides the previously included gzip and bzip2 compression algorithms, it is now also possible to compress WAL files using the pigz, pygzip, and pybzip2 algorithms (thanks to Stefano Zacchiroli and Christoph Moench-Tegeder). Do not forget that it is also possible to specify your own custom compression/decompression filter.
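In the server configuration this is a single option. For instance (assuming pigz is installed on the Barman server; the custom filter commands are illustrative):

```ini
; Use pigz (parallel gzip) for WAL compression
compression = pigz

; Or plug in your own filter (hypothetical commands):
; compression = custom
; custom_compression_filter = xz -c
; custom_decompression_filter = xz -cd
```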

Customisation of binary paths

The new configuration option path_prefix allows you to list the directories where you want Barman to look for executables, such as the pg_receivexlog binary used for WAL streaming. The Streaming connection & WAL streaming section above shows this option in use.

Final thoughts

Barman has long been a well-known choice when disaster recovery comes to mind. If you want a complete list of its features, as well as information about how to configure and use it, the Barman documentation is a helpful place to check, and it’s always on hand.

Tags: backup, Barman, disaster recovery, DR, pgbarman, PostgreSQL, receive-wal, replication, RPO, streaming, streaming_archiver, wal, WAL streaming
2 replies
  1. Ka Kit Wong
    Ka Kit Wong says:
    August 24, 2016 at 11:13 am

    I was wondering how barman will behave when:
    1. physical replication slots are used.
    2. logical replication slots and a CDC decoder plugin is used on the master.
    In the first case, when a client is offline it needs to keep its wals available.
    In the second case, will it wait for treatment of the wal untill the get_changes is called or will it be archived anyway?
    In both cases what is then the expected way barman full backup will behave?

    Reply
    • Petr Jelinek
      Petr Jelinek says:
      August 24, 2016 at 5:39 pm

      Removal of WAL by archiving (barman) is orthogonal to replication slots. The replication slots will keep the WAL file on the master even if it was already archived.

      Reply
