Automating Barman with Puppet: it2ndq/barman (part three)

In the second part of the Automating Barman with Puppet series we configured, via Puppet, two virtual machines: a PostgreSQL server and a Barman server to back it up. However, human intervention was required to perform the SSH key exchange, and most of the manifest was written to allow the two servers to access each other. In this third and final part of the series, we will look at how to configure a third VM that will act as the Puppet Master, and use it to simplify the configuration of PostgreSQL and Barman.

The complete code for this tutorial is available on GitHub at http://github.com/2ndquadrant-it/vagrant-puppet-barman.

Configuring the Puppet Master: Vagrant

First, we change the Vagrantfile to boot a third VM, called “puppet”, which will be our Puppet Master. To ensure that the machine is immediately reachable by the Puppet agents present on each VM, we add a “puppet” entry to the /etc/hosts file in the first script we run. We also need to enable the Puppet agent, as Debian-like distributions disable it by default.
Finally, within the Vagrantfile, let’s make a distinction between master and agents. The master will initially load its configuration straight from the manifest files; the agents running on each host will then apply the configuration sent by the master. Agents will also send data back to the master, allowing other nodes to use it to build their own configuration. For this reason, an agent is also set to run on the master.

The Vagrantfile is as follows:

Vagrant.configure("2") do |config|
  {
    :puppet => {
      :ip      => '192.168.56.220',
      :box     => 'ubuntu/trusty64',
      :role    => 'master'
    },
    :pg => {
      :ip      => '192.168.56.221',
      :box     => 'ubuntu/trusty64',
      :role    => 'agent'
    },
    :backup => {
      :ip      => '192.168.56.222',
      :box     => 'ubuntu/trusty64',
      :role    => 'agent'
    }
  }.each do |name,cfg|
    config.vm.define name do |local|
      local.vm.box = cfg[:box]
      local.vm.hostname = name.to_s + '.local.lan'
      local.vm.network :private_network, ip: cfg[:ip]
      family = 'ubuntu'
      bootstrap_url = 'http://raw.github.com/hashicorp/puppet-bootstrap/master/' + family + '.sh'

      # Run puppet-bootstrap and enable the Puppet agent
      local.vm.provision :shell, :inline => <<-eos
        if [ ! -e /var/tmp/.bash.provision.done ]; then
          echo "192.168.56.220  puppet.local.lan        puppet puppetdb puppetdb.local.lan" >> /etc/hosts
          curl -L #{bootstrap_url} | bash
          puppet agent --enable
          touch /var/tmp/.bash.provision.done
        fi
      eos

      if cfg[:role] == 'master'
        # Puppet master needs RAM
        local.vm.provider "virtualbox" do |v|
          v.memory = 1024
        end

        # Provision the master with Puppet
        local.vm.provision :puppet do |puppet|
          puppet.manifests_path = "manifests"
          puppet.module_path = [".", "modules"]
          puppet.manifest_file = "site.pp"
          puppet.options = [
           '--verbose',
          ]
        end
      end

      # Puppet agents should be provisioned by the master
      local.vm.provision :puppet_server do |puppet|
        puppet.options = [
         '--verbose',
        ]
      end

    end
  end
end

Configuring the Puppet Master: Puppet

Once we have the Vagrantfile, it’s time to write the Puppet manifest for the master. Two additional modules are required: puppetlabs/puppetdb and stephenrjohnson/puppet. puppetlabs/puppetdb configures PuppetDB.

PuppetDB uses a PostgreSQL database to collect the events and resources exported by the infrastructure nodes so they can exchange information and configure each other.

stephenrjohnson/puppet allows you to configure a Puppet Master with Apache and Passenger, as well as the Puppet agents on the various nodes of the network.

Our Puppetfile will look like this:

forge 'http://forgeapi.puppetlabs.com'
mod 'it2ndq/barman'
mod 'puppetlabs/postgresql'
mod 'puppetlabs/puppetdb'
mod 'stephenrjohnson/puppet'

We can now run

$ librarian-puppet install --verbose

to install the new modules.

At this point we can edit the site.pp manifest, adding the puppet node with the following snippet for PuppetDB and the Puppet Master:

  # Setup PuppetDB
  class { 'puppetdb': }->
  # Setup Puppet Master, Apache and Passenger
  class { 'puppet::master':
    storeconfigs => true,
    autosign     => true,
    environments => 'directory',
  }->

We have thus configured the Puppet Master to automatically accept connections from all the machines (autosign) and to distribute catalogues, events and exported resources (storeconfigs). Finally, we use Puppet's directory environments to distribute the catalogue to the agents. The standard directory for environments is /etc/puppet/environments and the default environment is production; our manifests and modules will belong to it. As Vagrant already shares the directory containing the Vagrantfile with the machines it creates, we can simply create symbolic links to it:

  # Have the manifest and the modules available for the master to distribute
  file {
    ['/etc/puppet/environments', '/etc/puppet/environments/production']:
      ensure => directory;
    '/etc/puppet/environments/production/modules':
      ensure => 'link',
      target => '/vagrant/modules';
    '/etc/puppet/environments/production/manifests':
      ensure => 'link',
      target => '/vagrant/manifests';
  }

We need to configure the agent on every node: choose how it should be run, select which environment to use, and point it at the Puppet Master. Running the agent via cron takes up fewer resources than running it as a daemon:

  # Configure Puppet Agent
  class { 'puppet::agent':
    puppet_run_style => 'cron',
    puppet_server    => 'puppet.local.lan',
    environment      => 'production',
  }

We can now begin sharing resources between the nodes. The pg and backup nodes will need to communicate with each other via SSH, so each will need to know the IP address of the other server and to have its host key in known_hosts. We export and collect these resources on each node, as shown in the following snippet:

  @@host { 'backup_host':
    ensure       => 'present',
    name         => $::fqdn,
    host_aliases => $::hostname,
    ip           => '192.168.56.222',
  }

  @@sshkey { "${::hostname}_ecdsa":
    host_aliases => [ $::hostname, $::fqdn ],
    type         => 'ecdsa-sha2-nistp256',
    key          => $::sshecdsakey,
  }

  # Collect:
  Host <<| |>>
  Sshkey <<| |>>

barman::autoconfigure

We now have everything we need to configure the PostgreSQL server and the Barman server. Thanks to autoconfiguration, this next step becomes much easier. For the backup node, it’s as simple as setting the autoconfigure parameter and exporting the right IP address. Since the Vagrant machines have two IP addresses, we must force backup to use 192.168.56.222. Moreover, we are going to use the PGDG Barman package, enabling manage_package_repo:

  class { 'barman':
    autoconfigure       => true,
    exported_ipaddress  => '192.168.56.222/32',
    manage_package_repo => true,
  }

On the pg node we install the PostgreSQL server and, through the barman::postgres class, declare how Barman manages it. The class exports the cron job for the execution of the barman backup pg command, along with the server definition that the backup node will import via autoconfigure:

  # Configure PostgreSQL
  class { 'postgresql::server':
    listen_addresses     => '*',
  }

  # Export the parameters required by Barman
  class { 'barman::postgres':
    retention_policy        => 'RECOVERY WINDOW OF 1 WEEK',
    minimum_redundancy      => 1,
    last_backup_maximum_age => '1 WEEK',
    reuse_backup            => 'link',
    backup_hour             => 1,
    backup_minute           => 0,
  }

Testing

Everything we have looked at so far can be tested by cloning the project on GitHub and executing the following commands in the newly-created directory:

$ librarian-puppet install --verbose
$ vagrant up
$ vagrant provision
$ vagrant provision

The system has to perform three provisioning runs (the first happens automatically during vagrant up) before all the exported resources have been collected by the nodes. At this point we can log into the backup machine and check that a backup can be performed:

$ vagrant ssh backup
root@backup:~# barman backup all
Starting backup for server pg in /var/lib/barman/pg/base/20150320T114208
Backup start at xlog location: 0/2000028 (000000010000000000000002, 00000028)
Copying files.
Copy done.
Backup size: 18.7 MiB. Actual size on disk: 18.7 MiB (-0.00% deduplication ratio).
Asking PostgreSQL server to finalize the backup.
Backup end at xlog location: 0/20000B8 (000000010000000000000002, 000000B8)
Backup completed
Processing xlog segments for pg
        Older than first backup. Trashing file 000000010000000000000001 from server pg
        000000010000000000000002
        000000010000000000000002.00000028.backup
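
Before or after taking a backup, the whole chain can also be verified with Barman's built-in diagnostics. A quick sanity check, run on the backup machine (output omitted here):

root@backup:~# barman check pg
root@backup:~# barman list-backup pg

barman check verifies SSH and PostgreSQL connectivity, WAL archiving and the retention settings for the pg server, while barman list-backup lists the backups taken so far.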

Conclusion

Although the initial configuration of a Puppet Master can be laborious, its benefits are enormous. Not only is the configuration of Barman much easier; any other addition to the infrastructure is also significantly simplified. For example, adding an Icinga or Nagios server becomes much simpler when every single server is able to export the services that need to be monitored (check_postgres or barman check --nagios).
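
As a purely illustrative sketch of that idea (using the nagios_service resource type that shipped with Puppet at the time; the check_command shown here is hypothetical), each monitored node could export its own service check and the monitoring server would simply collect them all:

  # On each monitored node: export a Nagios service check for Barman
  @@nagios_service { "barman_check_${::hostname}":
    host_name           => $::fqdn,
    check_command       => 'check_nrpe!barman_check_nagios',
    service_description => "Barman check on ${::hostname}",
    use                 => 'generic-service',
  }

  # On the monitoring node: collect every exported check
  Nagios_service <<| |>>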

Also, in the above example we used a single PostgreSQL server and a single Barman server, but in complex infrastructures with many database servers it is possible to declare multiple Barman servers and use host_group to identify which PostgreSQL servers each Barman server should back up.
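
As a sketch of how that might look (assuming the host_group parameter exposed by the it2ndq/barman module; the group name is illustrative):

  # On the Barman server dedicated to the 'dc1' group
  class { 'barman':
    autoconfigure       => true,
    exported_ipaddress  => '192.168.56.222/32',
    manage_package_repo => true,
    host_group          => 'dc1',
  }

  # On each PostgreSQL server belonging to the same group
  class { 'barman::postgres':
    host_group => 'dc1',
  }

Only the resources exported by nodes sharing the same host_group are collected, so each Barman server backs up just its own set of PostgreSQL servers.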

Thank you for reading the Automating Barman with Puppet series, I hope it has been useful and would love to know your thoughts.
Finally, a special thank you goes to Alessandro Franceschi for the initial idea of adding an autoconfiguration system to the Barman module.
