Monitor
Monitor existing PostgreSQL or RDS
Overview
Pigsty uses the modern observability stack for PostgreSQL monitoring:
- Grafana for metrics visualization and PostgreSQL datasources
- Prometheus for PostgreSQL / Pgbouncer / Patroni / HAProxy / Node metrics
- Loki for PostgreSQL / Pgbouncer / Patroni / pgBackRest logs
- Battery-included dashboards for PostgreSQL and everything else
Metrics
PostgreSQL metrics are defined by collector files: pg_exporter.yml. Prometheus recording rules and alert evaluation further process them: files/prometheus/rules/pgsql.yml
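For illustration, a derived metric in that rules file could be defined as in the sketch below; the rule group name, record name, metric names, and expression are assumptions for demonstration, not a verbatim excerpt of pgsql.yml:

groups:
  - name: pgsql-derived-metrics    # hypothetical rule group name
    rules:
      # Example: derive a per-database transaction rate from pg_exporter's commit/rollback counters.
      # The metric names here are assumptions for illustration.
      - record: pg:db:xacts
        expr: rate(pg_db_xact_commit[1m]) + rate(pg_db_xact_rollback[1m])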
There are three identity labels: cls, ins, ip, which will be attached to all metrics & logs. The node & haproxy monitoring targets reuse the same identity to provide consistent metrics & logs.
{ cls: pg-meta, ins: pg-meta-1, ip: 10.10.10.10 }
{ cls: pg-test, ins: pg-test-1, ip: 10.10.10.11 }
{ cls: pg-test, ins: pg-test-2, ip: 10.10.10.12 }
{ cls: pg-test, ins: pg-test-3, ip: 10.10.10.13 }

Logs
PostgreSQL-related logs are collected by promtail and sent to Loki on infra nodes by default.
- pg_log_dir: postgres log dir, /pg/log/postgres by default
- pgbouncer_log_dir: pgbouncer log dir, /pg/log/pgbouncer by default
- patroni_log_dir: patroni log dir, /pg/log/patroni by default
- pgbackrest_log_dir: pgbackrest log dir, /pg/log/pgbackrest by default
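If your instances write logs to non-default locations, these parameters can be overridden per cluster so promtail scrapes the right directories. A minimal sketch, assuming a cluster named pg-meta and purely illustrative custom paths:

pg-meta:
  hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
  vars:
    pg_cluster: pg-meta
    pg_log_dir: /data/pg/log/postgres        # override the default /pg/log/postgres
    patroni_log_dir: /data/pg/log/patroni    # override the default /pg/log/patroni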
Targets
Prometheus monitoring targets are defined in static files under /etc/prometheus/targets/pgsql/. Each instance will have a corresponding file. Take pg-meta-1 as an example:
# pg-meta-1 [primary] @ 10.10.10.10
- labels: { cls: pg-meta, ins: pg-meta-1, ip: 10.10.10.10 }
  targets:
    - 10.10.10.10:9630    # <--- pg_exporter for PostgreSQL metrics
    - 10.10.10.10:9631    # <--- pg_exporter for Pgbouncer metrics
    - 10.10.10.10:8008    # <--- patroni metrics

When the global flag patroni_ssl_enabled is set, the patroni target will be managed as /etc/prometheus/targets/patroni/<ins>.yml because it requires a different scrape endpoint (https).
Prometheus monitoring targets are removed when a cluster is removed by bin/pgsql-rm or pgsql-rm.yml. You can also use playbook subtasks, or remove them manually:
bin/pgmon-rm <ins>    # remove prometheus targets from all infra nodes

Remote RDS targets are managed as /etc/prometheus/targets/pgrds/<cls>.yml. They are created by the pgsql-monitor.yml playbook or the bin/pgmon-add script.
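Such a file follows the same static-target layout shown above for local instances. A sketch, assuming an infra node at 10.10.10.10 running pg_exporter on local port 20001 for a remote cluster named pg-foo (all names and addresses illustrative):

# /etc/prometheus/targets/pgrds/pg-foo.yml (illustrative)
- labels: { cls: pg-foo, ins: pg-foo-1, ip: 10.10.10.10 }
  targets:
    - 10.10.10.10:20001   # <--- pg_exporter on the infra node, scraping the remote database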
Monitor Mode
There are three ways to monitor PostgreSQL instances in Pigsty:
| Item \ Level | L1 | L2 | L3 |
|---|---|---|---|
| Name | Remote Database Service | Existing Deployment | Fully Managed Deployment |
| Abbr | RDS | MANAGED | FULL |
| Scenario | Connection string (URL) only | ssh-sudo-able | Instances created by Pigsty |
| PGCAT Functionality | ✅ Full Availability | ✅ Full Availability | ✅ Full Availability |
| PGSQL Functionality | ✅ PG metrics only | ✅ PG and node metrics | ✅ Full Support |
| Connection Pool Metrics | ❌ Not available | ⚠️ Optional | ✅ Pre-Configured |
| Load Balancer Metrics | ❌ Not available | ⚠️ Optional | ✅ Pre-Configured |
| PGLOG Functionality | ❌ Not Available | ⚠️ Optional | ⚠️ Optional |
| PG Exporter | ⚠️ On infra nodes | ✅ On DB nodes | ✅ On DB nodes |
| Node Exporter | ❌ Not Deployed | ✅ On DB nodes | ✅ On DB nodes |
| Intrusion into DB nodes | ✅ Non-Intrusive | ⚠️ Installing Exporter | ⚠️ Fully Managed by Pigsty |
| Instance Already Exists | ✅ Yes | ✅ Yes | ⚠️ Created by Pigsty |
| Monitoring users and views | ⚠️ Manual Setup | ⚠️ Manual Setup | ✅ Auto configured |
| Deployment Usage Playbook | bin/pgmon-add <cls> | subtasks of pgsql.yml/node.yml | pgsql.yml |
| Required Privileges | connectable PGURL from infra nodes | DB node ssh and sudo privileges | DB node ssh and sudo privileges |
| Function Overview | PGCAT + PGRDS | Most Functionality | Full Functionality |
Monitor Existing Cluster
If the target DB node can be managed by Pigsty (i.e., it is accessible via ssh and sudo is available), you can use the pg_exporter task in the pgsql.yml playbook to deploy the monitoring component PG Exporter on the target node, in the same manner as a standard deployment.
You can also deploy the connection pool and its monitoring on existing instance nodes using the pgbouncer and pgbouncer_exporter tasks from the same playbook. Additionally, you can deploy host monitoring, load balancing, and log collection components using the node_exporter, haproxy, and promtail tasks from the node.yml playbook, achieving a user experience similar to a native Pigsty cluster.
Existing clusters are defined in much the same way as normal clusters managed by Pigsty; the difference is that you selectively run certain tasks from the pgsql.yml and node.yml playbooks instead of running them in full:
./node.yml  -l <cls> -t node_repo,node_pkg           # Add the INFRA node's yum repo on target hosts and install packages
./node.yml  -l <cls> -t node_exporter,node_register  # Configure host monitoring and register it with Prometheus
./node.yml  -l <cls> -t promtail                     # Configure host log collection and ship logs to Loki
./pgsql.yml -l <cls> -t pg_exporter,pg_register      # Configure PostgreSQL monitoring and register it with Prometheus/Grafana

Since the target database cluster already exists, you must manually set up monitoring users, schemas, and extensions on it.
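A minimal sketch of that manual setup is shown below; it assumes the conventional Pigsty monitor user dbuser_monitor and a monitor schema, so adjust names, passwords, and pg_hba rules to your environment and refer to Monitor Setup for the authoritative steps:

CREATE USER dbuser_monitor;                                   -- dedicated monitoring user (name assumed)
COMMENT ON ROLE dbuser_monitor IS 'system monitor user';
ALTER USER dbuser_monitor PASSWORD 'DBUser.Monitor';          -- placeholder password, change it
GRANT pg_monitor TO dbuser_monitor;                           -- built-in monitoring role, PostgreSQL 10+
CREATE SCHEMA IF NOT EXISTS monitor;                          -- schema for monitoring objects
GRANT USAGE ON SCHEMA monitor TO dbuser_monitor;
CREATE EXTENSION IF NOT EXISTS pg_stat_statements WITH SCHEMA monitor;  -- slow query / statement metrics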
Monitor RDS
If you can only access the target database via PGURL (database connection string), you can refer to the instructions here for configuration. In this mode, Pigsty deploys the corresponding PG Exporter on the INFRA node to fetch metrics from the remote database, as shown below:
------ infra ------
|                 |
|   prometheus    |            v---- pg-foo-1 ----v
|       ^         |  metrics   |         ^        |
|   pg_exporter <-|------------|---- postgres     |
|   (port: 20001) |            | 10.10.10.10:5432 |
|       ^         |            ^------------------^
|       ^         |
|       ^         |            v---- pg-foo-2 ----v
|       ^         |  metrics   |         ^        |
|   pg_exporter <-|------------|---- postgres     |
|   (port: 20002) |            | 10.10.10.11:5433 |
-------------------            ^------------------^

The monitoring system will no longer have host / pooler / load balancer metrics, but PostgreSQL metrics & catalog info are still available. Pigsty has two dedicated dashboards for that: PGRDS Cluster and PGRDS Instance. Overview and Database level dashboards are reused. Since Pigsty cannot manage your RDS, you have to set up monitoring on the target database in advance.
Below, we use the sandbox environment as an example: assume the pg-meta cluster is an RDS instance pg-foo-1 to be monitored, and the pg-test cluster is an RDS cluster pg-bar to be monitored:
- Create monitoring schemas, users, and permissions on the target. Refer to Monitor Setup for details.
- Declare the cluster in the configuration inventory. For example, suppose we want to monitor the "remote" pg-meta & pg-test clusters:
infra:            # Infra cluster for proxies, monitoring, alerts, etc.
  hosts: { 10.10.10.10: { infra_seq: 1 } }
  vars:           # Install pg_exporter on the 'infra' group for remote postgres RDS
    pg_exporters: # List all remote instances here, assign a unique unused local port as the key
      20001: { pg_cluster: pg-foo, pg_seq: 1, pg_host: 10.10.10.10 , pg_databases: [{ name: meta }] } # Register the meta database as a Grafana data source
      20002: { pg_cluster: pg-bar, pg_seq: 1, pg_host: 10.10.10.11 , pg_port: 5432 } # Several different ways to specify the connection string
      20003: { pg_cluster: pg-bar, pg_seq: 2, pg_host: 10.10.10.12 , pg_exporter_url: 'postgres://dbuser_monitor:DBUser.Monitor@10.10.10.12:5432/postgres?sslmode=disable'}
      20004: { pg_cluster: pg-bar, pg_seq: 3, pg_host: 10.10.10.13 , pg_monitor_username: dbuser_monitor, pg_monitor_password: DBUser.Monitor }

The databases listed in the pg_databases field will be registered in Grafana as PostgreSQL data sources, providing data for the PGCAT dashboards. If you don't want to use PGCAT and register databases in Grafana, set pg_databases to an empty array or leave it blank.
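After declaring the remote clusters, registration is applied with the bin/pgmon-add script mentioned above; the cluster names below follow this example:

bin/pgmon-add pg-foo     # add monitoring for the remote pg-foo cluster
bin/pgmon-add pg-bar     # add monitoring for the remote pg-bar cluster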