
Webinar: pg_catalog Unveiled! [Follow Up]

April 13, 2020/0 Comments/in Webinars /by Bilal Ibrar

PostgreSQL users are always interested in their system's performance, whether to find out if any improvements need to be made or to produce the occasional health report requested by their managers.

Standard monitoring tools can track CPU, RAM, and I/O consumption, but they won't be able to tell you whether indexes are being used, whether tables are bloated, whether there is replication lag, or a variety of other interesting and useful things happening inside the PostgreSQL environment.

To explore these features and more, 2ndQuadrant held a live webinar titled “pg_catalog Unveiled!”, hosted by Boriss Mejías (PostgreSQL Consultant at 2ndQuadrant).

This webinar reviewed the possibilities offered by the PostgreSQL catalog. It also explored how to exploit the catalog, how to feed the information into other monitoring tools, and how the tables in pg_catalog relate to fundamental topics such as performance, replication, MVCC, and security.

Those who weren’t able to attend the live webinar can now view the recording here.

Due to limited time, some questions could not be answered during the live webinar, so the host's answers are provided below:


Question: Do we need to run vacuum and analyze for catalog tables as well?

Answer: Yes, catalog tables also need to be vacuumed and analyzed. This is done automatically by autovacuum, but you can also monitor the bloat of the catalog tables by looking at `pg_stat_sys_tables`. Also note that many of the objects we used during the talk are views, not physical tables.
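
For instance, a quick way to keep an eye on catalog bloat is the dead-tuple counters in `pg_stat_sys_tables` (a minimal sketch; adjust the ordering and limit to taste):

```sql
-- Dead-tuple counts and last autovacuum/autoanalyze times for catalog tables.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum,
       last_autoanalyze
  FROM pg_stat_sys_tables
 WHERE schemaname = 'pg_catalog'
 ORDER BY n_dead_tup DESC
 LIMIT 10;
```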

 

Question: After seeing the temp files, how do you release those temp files?

Answer: During the presentation, we discussed investigating temporary files to find out whether we need a higher value for `work_mem`. Those temporary files are managed by PostgreSQL; you don't have to remove them yourself. PostgreSQL removes them as soon as the query has finished.
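
If you want to track how much temporary file activity each database generates, the cumulative counters in `pg_stat_database` are a good starting point (a minimal sketch):

```sql
-- Cumulative count and total size of temporary files written per database.
SELECT datname,
       temp_files,
       pg_size_pretty(temp_bytes) AS temp_size
  FROM pg_stat_database
 ORDER BY temp_bytes DESC;
```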

 

Question: We have a lot of open source utilities to monitor Postgres — can you name some of the very useful ones?

Answer: Icinga and Nagios are very popular. Although OmniDB is not a monitoring tool, it has a monitoring dashboard.

 

Question: Where can we track deadlocks in PostgreSQL?

Answer: In the catalog we can only observe the number of deadlocks, as we saw during the presentation. To track the details of each deadlock occurrence, you have to check the PostgreSQL logs; deadlocks are always logged.
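
The per-database counter lives in `pg_stat_database` (a minimal sketch):

```sql
-- Number of deadlocks detected per database since statistics were last reset.
SELECT datname, deadlocks
  FROM pg_stat_database
 ORDER BY deadlocks DESC;
```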

 

Question: If query performance goes bad after ANALYZE, is there a way to roll back the stats in the statistics catalog tables?

Answer: No, there is no way of rolling back statistics. But if you are getting bad performance after running ANALYZE, your data may have a special distribution, and you should have a look at CREATE STATISTICS.
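
As a hedged illustration (the `orders` table and its columns are hypothetical), extended statistics tell the planner about correlated columns:

```sql
-- Hypothetical example: city and zip_code in "orders" are strongly correlated,
-- so record the functional dependency and distinct-value counts between them.
CREATE STATISTICS orders_city_zip (dependencies, ndistinct)
    ON city, zip_code
  FROM orders;

-- Extended statistics are only populated by the next ANALYZE.
ANALYZE orders;
```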

 

Question: I have the following results for temporary files: temp_files = 380, pg_size_pretty = 227 GB. Should I do something?

Answer: Yes. Enable `log_temp_files`, try to find a size that covers 80%-90% of your temporary files, and increase `work_mem` to that value. Make sure you have enough RAM for that.
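
One possible way to start (the 10MB threshold is just an example value, not a recommendation):

```sql
-- Log every temporary file larger than 10MB; the logged sizes help you pick
-- a work_mem value that covers most of them.
ALTER SYSTEM SET log_temp_files = '10MB';
SELECT pg_reload_conf();
```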

 

Question: How can we get some kind of slow-query history?

Answer: Not in the catalog, but see the `pg_stat_statements` extension.
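
A minimal sketch, assuming `pg_stat_statements` has been added to `shared_preload_libraries` and the server restarted:

```sql
-- Create the extension in the database you want to inspect.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top statements by total execution time (column names as of PostgreSQL 12;
-- from PostgreSQL 13 on they become total_exec_time/mean_exec_time).
SELECT calls,
       round(total_time::numeric, 2) AS total_ms,
       round(mean_time::numeric, 2)  AS mean_ms,
       query
  FROM pg_stat_statements
 ORDER BY total_time DESC
 LIMIT 10;
```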

 

Question: What’s the difference in design rationale between information_schema and pg_catalog?

Answer: information_schema follows the design of the SQL standard, while pg_catalog is designed by the PostgreSQL development team to suit the specific needs of PostgreSQL. Much of the data in information_schema is exposed through views over pg_catalog.
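
You can see this relationship directly; for example, `information_schema.tables` is itself a view built on pg_catalog relations (a minimal sketch):

```sql
-- Print the definition of information_schema.tables, which reads from
-- pg_catalog relations such as pg_class and pg_namespace.
SELECT pg_get_viewdef('information_schema.tables'::regclass, true);
```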

 

Question: We sometimes see an autovacuum worker process running in the middle of the day. Can we move it into the maintenance window? What are the pros and cons?

Answer: Scheduling a VACUUM job at night lets you start the day with clean tables and delays the first run of autovacuum. However, don't disable autovacuum: if it doesn't run when needed, the planner will make decisions based on outdated statistics, and the tables will accumulate more bloat than recommended, which is also bad for performance.
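
A minimal sketch of what such a nightly job could run (how you schedule it, for example via cron and psql, is up to you):

```sql
-- Run during the maintenance window from a scheduled psql call,
-- while leaving autovacuum enabled for the rest of the day.
VACUUM (ANALYZE, VERBOSE);
```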


To stay updated on upcoming webinars by 2ndQuadrant, you can visit our Webinars page.

For any questions, comments, or feedback, please visit our website or send an email to [email protected].
