Introduction
In the evolving landscape of database management, optimizing performance is paramount. For enterprises relying on PostgreSQL, a robust monitoring tool can make all the difference, and the pg_profile extension can be a game changer in the realm of PostgreSQL performance tuning. Every millisecond counts, and navigating the complexities of performance tuning can be daunting for PostgreSQL users. Monitoring and fine-tuning slow queries can significantly improve performance. In this blog, we will delve into installing and configuring pg_profile and generating statistical reports with it.
pg_profile
pg_profile is an extension designed specifically for monitoring and analyzing the performance of PostgreSQL databases. It helps database administrators identify and address performance bottlenecks, optimize queries, and fine-tune configurations to enhance the overall efficiency and responsiveness of PostgreSQL database systems. Key features of pg_profile include query performance analysis, resource monitoring, index usage analysis, and configuration parameter recommendations.
Installation
Here, we install version 4.6 of pg_profile and extract it into the PostgreSQL extension directory.
#Installing using wget
[root@localhost ~]# wget https://github.com/zubkov-andrei/pg_profile/releases/download/4.6/pg_profile--4.6.tar.gz
#Unzip into a folder
[root@localhost ~]# tar -xvf pg_profile--4.6.tar.gz --directory /usr/pgsql-16/share/extension
pg_profile--4.6.sql
pg_profile.control
pg_profile--4.5--4.6.sql
[root@localhost ~]# cd /usr/pgsql-16/share/extension
[root@localhost extension]# ls -ltr
total 1680
-rw-r--r--. 1 postgres postgres 193 Apr 27 20:10 pg_profile.control
-rw-r--r--. 1 postgres postgres 1073167 Apr 27 20:10 pg_profile--4.6.sql
-rw-r--r--. 1 postgres postgres 635910 Apr 27 20:10 pg_profile--4.5--4.6.sql
Create extension
The pg_profile extension depends on the plpgsql and dblink extensions. The only mandatory requirement for the server cluster is the ability to connect from the pg_profile database using the provided server connection string. All other requirements are optional, but they can improve the completeness of the gathered statistics. To make these extensions available, install the contrib package:
[root@localhost extension]# yum install postgresql16-contrib
#Log in to psql and create the extensions
postgres=# create extension dblink;
CREATE EXTENSION
postgres=# create extension pg_stat_statements;
CREATE EXTENSION
postgres=# create extension pg_profile;
CREATE EXTENSION
postgres=# \dx
List of installed extensions
Name | Version | Schema | Description
--------------------+---------+------------+------------------------------------------------------------------------
dblink | 1.2 | public | connect to other PostgreSQL databases from within a database
pg_profile | 4.6 | public | PostgreSQL load profile repository and report builder
pg_stat_statements | 1.10 | public | track planning and execution statistics of all SQL statements executed
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
Configuration
We need to update a few parameters in the postgresql.conf file:
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 20000
pg_stat_statements.track = top
#Restart the server for shared_preload_libraries to take effect
To list the objects in pg_profile, use the command \dx+ pg_profile. It displays all the objects in the extension, including functions, tables, and views.
Generating the statistical report
To generate a statistical report, we first need to know the list of servers registered in pg_profile.
#To know the servers list in pg_profile
postgres=# select * from show_servers() ;
server_name | connstr | enabled | max_sample_age | description
-------------+---------------------------+---------+----------------+-------------
local | dbname=postgres port=5432 | t | |
(1 row)
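By default, only the local cluster is registered. Additional clusters can be added with pg_profile's create_server() function; the server name and connection string below are purely illustrative assumptions, so substitute your own:

```shell
# Register a remote cluster for monitoring (name and connstr are example values)
psql -d postgres -c "SELECT create_server('staging', 'host=192.168.1.20 dbname=postgres port=5432');"
```

Once registered and enabled, the new server appears in show_servers(), and subsequent take_sample() calls collect statistics for every enabled server.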
By default, there is a local server. In order to generate a report, we need to take samples at regular intervals. We can then generate a report for the interval between any two samples.
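pg_profile does not schedule sampling on its own, so samples are typically taken by an external scheduler such as cron. A minimal crontab sketch for the postgres user follows; the 30-minute interval is an assumption, so choose one that suits your retention and reporting needs:

```shell
# crontab -e (as the postgres user): take a pg_profile sample every 30 minutes
*/30 * * * * psql -d postgres -Aqtc 'SELECT take_sample()' >/dev/null 2>&1
```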
# To take samples
postgres=# select * from take_sample();
server | result | elapsed
--------+--------+-------------
local | OK | 00:00:01.38
(1 row)
#To view samples taken
postgres=# select * from show_samples();
sample | sample_time | sizes_collected | dbstats_reset | clustats_reset | archstats_reset
--------+---------------------------+-----------------+---------------+----------------+-----------------
1 | 2024-05-17 00:38:59+05:30 | t | | |
2 | 2024-05-17 00:40:01+05:30 | t | | |
3 | 2024-05-17 00:40:06+05:30 | t | | |
4 | 2024-05-17 00:43:51+05:30 | t | | |
(4 rows)
Here, we have taken four samples, so we can generate a report between any pair of them: 1-2, 1-3, 1-4, 2-3, 2-4, and 3-4.
#To generate a report in html
[postgres@localhost ~]$ psql -Aqtc "SELECT get_report('local',1,2)" -o Report.html
[postgres@localhost ~]$ ls -ltr
total 348
drwx------. 4 postgres postgres 51 May 15 07:25 16
-rw-r--r--. 1 postgres postgres 350993 May 17 00:56 Report.html
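To script report generation over several consecutive sample pairs, a small shell loop can build the psql commands. This sketch only prints the commands rather than executing them; swap echo for direct execution once the output looks right:

```shell
# Print a get_report command for each consecutive sample pair (1-2, 2-3, 3-4)
for s in 1 2 3; do
  echo "psql -Aqtc \"SELECT get_report('local', $s, $((s+1)))\" -o report_${s}_$((s+1)).html"
done
```

pg_profile also provides a get_diffreport() function that compares two sample intervals side by side, which is handy when investigating a regression between two time ranges.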
On viewing the generated HTML report, we can find SQL query statistics, schema object statistics, user function statistics, and vacuum-related statistics.
Summary
To summarize: in the current landscape of database management, it is important to monitor and optimize database performance. In this post, we introduced pg_profile, a tool tailored specifically for PostgreSQL performance monitoring, and walked through its key features, installation, configuration, and the generation of its statistical reports.
Thank you and Stay tuned for more…