Understanding PostgreSQL Parameters: Tuning Memory Parameters

Introduction:

You may already have read the earlier posts in my Understanding PostgreSQL Parameters series; this installment focuses on the memory-related parameters.
These parameters primarily relate to how PostgreSQL utilizes the server’s memory to efficiently handle data storage and processing operations.

The most important memory-related parameters we’re going to focus on today are:

  • shared_buffers
  • temp_buffers
  • work_mem
  • maintenance_work_mem
  • logical_decoding_work_mem
  • huge_pages

These settings control how PostgreSQL manages in-memory data structures during operations such as caching, sorting, temporary table usage, maintenance tasks, and leveraging large memory pages for performance. Proper tuning of these parameters is critical for ensuring optimal database performance, especially under heavy workloads.

Why do we need to tune memory allocation parameters in PostgreSQL?

Tuning memory allocation parameters is essential for maximizing PostgreSQL performance and maintaining system stability. These parameters control how efficiently PostgreSQL can manage query execution, caching, and background operations. Proper tuning helps achieve the following benefits:

Boosts Query Performance:

Allocating sufficient memory for operations such as sorting, hashing, joins, and temporary tables (via work_mem and temp_buffers) enables PostgreSQL to process data in memory, which is significantly faster than falling back to disk-based temporary storage.

Reduces Disk I/O and Prevents Spilling:

When memory is insufficient, PostgreSQL resorts to disk for intermediate data during large operations. This increases I/O load and slows down performance. Parameters like logical_decoding_work_mem and maintenance_work_mem help prevent unnecessary disk usage by keeping more data in RAM during logical replication and maintenance tasks.

Improves System Stability and Throughput:

Proper use of memory parameters like shared_buffers allows PostgreSQL to cache frequently accessed data, minimizing reads from disk and improving response time. However, over-allocating memory can lead to system-level memory pressure, swapping, or even crashes, and so balance is key.

Prevents Disk Space Issues:

If PostgreSQL consistently spills operations to disk due to poor memory configuration, it may eventually fill up available disk space, especially under heavy workloads. This can result in query failures, replication lag, or even database outages.

shared_buffers:

The shared_buffers parameter in PostgreSQL controls the amount of memory dedicated to caching data pages in shared memory. This cache is used by all background processes and user connections, making reads and writes of cached pages much faster than fetching them directly from disk. PostgreSQL allocates this memory at server startup and holds it for the entire lifetime of the instance.

Show the current value:

postgres=# SHOW shared_buffers ;
 shared_buffers 
----------------
 128MB
(1 row)

How to set shared_buffers

You can modify the parameter using the ALTER SYSTEM command:

postgres=# ALTER SYSTEM SET shared_buffers TO '16GB';
ALTER SYSTEM

Because shared_buffers can only be changed at server start, restart the PostgreSQL server to apply the change.

Verifying the Change

After the restart, re-check the parameter:

postgres=# SHOW shared_buffers ;
 shared_buffers 
----------------
 16GB
(1 row)

Best Practices and Recommendations:

It is generally recommended to set shared_buffers to 25% to 40% of total system memory.

Example: For a server with 64 GB RAM, a good value would be around 16 GB.
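As a quick sanity check of that guideline, here is a tiny Python sketch (the helper function is hypothetical, not a PostgreSQL tool) that computes the 25–40% range for a given amount of RAM:

```python
# Illustrative helper (not part of PostgreSQL): estimate a shared_buffers
# range from total system RAM using the common 25-40% guideline.

def shared_buffers_range_gb(total_ram_gb):
    """Return the (low, high) recommended shared_buffers values in GB."""
    return total_ram_gb * 0.25, total_ram_gb * 0.40

low, high = shared_buffers_range_gb(64)
print(f"For 64 GB RAM: {low:.0f} GB to {high:.1f} GB")  # 16 GB to 25.6 GB
```

For the 64 GB server above, this lands at 16 GB on the conservative end, matching the example value used in this post.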

Impact of Tuning shared_buffers:

Positive: Increasing shared_buffers allows PostgreSQL to cache more data in memory, reducing disk I/O and improving performance.

Caution: Setting this value too high (e.g., more than 50–70% of total RAM) can yield diminishing returns or cause contention with the operating system's page cache, leading to performance degradation.

work_mem:

The work_mem parameter defines the amount of memory available to internal operations such as sorts and hash tables before PostgreSQL starts writing to temporary disk files. It plays a crucial role in optimizing query performance, especially for operations like ORDER BY, DISTINCT, merge joins, hash joins, and hash-based aggregation. Each such operation gets its own work_mem allocation, meaning complex queries or many concurrent connections can collectively use far more memory than the setting itself suggests.

Show the current value:

postgres=# SHOW work_mem ;
 work_mem 
----------
 4MB
(1 row)

How to set work_mem

You can modify the parameter using the ALTER SYSTEM command:

postgres=# ALTER SYSTEM SET work_mem TO '64MB';
ALTER SYSTEM

To apply the changes to the PostgreSQL server, reload it.

Verifying the Change

After reload, re-check the parameter:

postgres=# SHOW work_mem ;
 work_mem 
----------
 64MB
(1 row)

Best Practices and Recommendations:

Increase work_mem for queries with large sorts, hash joins, or aggregations to reduce temporary disk usage and improve performance.

Use caution: since work_mem is allocated per operation, increasing it too much can lead to excessive memory usage during high-concurrency workloads.

Example: If a query includes 3 sorts and 2 hash joins, it may use up to 5 * work_mem for that single query.
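To see how quickly this multiplies under concurrency, here is a small illustrative calculation (the function and workload numbers are hypothetical, for back-of-envelope estimation only):

```python
# Illustrative estimate (not a PostgreSQL API): worst-case memory a workload
# could consume when each sort/hash operation gets its own work_mem.

def worst_case_work_mem_mb(work_mem_mb, ops_per_query, concurrent_queries):
    return work_mem_mb * ops_per_query * concurrent_queries

# One query with 3 sorts + 2 hash joins at work_mem = 64MB:
print(worst_case_work_mem_mb(64, 5, 1))    # 320 MB for a single query
# The same query shape across 100 concurrent sessions:
print(worst_case_work_mem_mb(64, 5, 100))  # 32000 MB, roughly 31 GB
```

This is why a value that looks harmless for one query can exhaust RAM on a busy server; size work_mem with your expected concurrency in mind.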

maintenance_work_mem:

maintenance_work_mem sets the maximum amount of memory that maintenance operations can use, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY. Since these operations are infrequent and usually run one at a time per session, it’s generally safe to allocate more memory to this parameter than to work_mem.

Show the current value:

postgres=# SHOW maintenance_work_mem ;
 maintenance_work_mem 
----------------------
 64MB
(1 row)

How to set maintenance_work_mem

You can modify the parameter using the ALTER SYSTEM command:

postgres=# ALTER SYSTEM SET maintenance_work_mem TO '2GB';
ALTER SYSTEM

To apply the changes to the PostgreSQL server, reload it.

Verifying the Change

After reload, re-check the parameter:

postgres=# SHOW maintenance_work_mem ;
 maintenance_work_mem 
----------------------
 2GB
(1 row)

Best Practice Recommendation:

Set maintenance_work_mem higher than work_mem to improve the performance of maintenance operations.

A general recommendation:

maintenance_work_mem = Total RAM * 0.05
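That rule of thumb is simple arithmetic; a minimal Python sketch (the helper is hypothetical, not a PostgreSQL utility) applied to the 64 GB server from earlier:

```python
# Illustrative helper: 5% rule of thumb for maintenance_work_mem.

def maintenance_work_mem_gb(total_ram_gb):
    return total_ram_gb * 0.05

print(f"{maintenance_work_mem_gb(64):.1f} GB")  # 3.2 GB for 64 GB of RAM
```

In practice you would round this to a convenient value (e.g., 3GB) and raise it further for one-off heavy operations such as large index builds.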

temp_buffers:

temp_buffers defines the maximum amount of memory each session can use for caching temporary table data. These buffers are session-local and are allocated only when a session actually accesses temporary tables during query execution.

Show the current value:

postgres=# SHOW temp_buffers ;
 temp_buffers 
--------------
 8MB
(1 row)

How to set temp_buffers

You can modify the parameter using the ALTER SYSTEM command:

postgres=# ALTER SYSTEM SET temp_buffers TO '32MB';
ALTER SYSTEM

To apply the changes to the PostgreSQL server, reload it.

Verifying the Change

After reload, re-check the parameter:

postgres=# SHOW temp_buffers ;
 temp_buffers 
--------------
 32MB
(1 row)

Best Practice Recommendation:

Allocate around 32MB or more, depending on your workload; a higher value helps when you expect queries to create large temporary tables.
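Internally, temp_buffers is a count of block-sized buffers. Assuming the default 8KB block size, a quick illustrative conversion (the helper function is hypothetical):

```python
# Rough illustration: temp_buffers is a number of block-sized buffers.
# Assumes the default PostgreSQL block size of 8KB.
BLOCK_SIZE_KB = 8

def temp_buffer_pages(temp_buffers_mb):
    """Convert a temp_buffers size in MB to a count of 8KB buffers."""
    return temp_buffers_mb * 1024 // BLOCK_SIZE_KB

print(temp_buffer_pages(8))   # 1024 buffers (the 8MB default)
print(temp_buffer_pages(32))  # 4096 buffers
```

Remember this is a per-session limit: 100 sessions all making heavy use of temporary tables at 32MB each could consume up to ~3.2GB in total.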

huge_pages:

The huge_pages parameter, introduced in PostgreSQL 9.4, controls the use of huge memory pages for the main shared memory area. Huge pages are a feature of modern operating systems that allow the use of larger memory pages (typically 2MB or 1GB) instead of the standard 4KB pages. This reduces the overhead of managing a large number of small pages and improves memory access performance.

Using huge_pages can result in:

  • Reduced CPU overhead during memory access
  • Improved performance for memory-intensive PostgreSQL workloads
  • Lower TLB (Translation Lookaside Buffer) miss rates

Valid Settings:

  • try – (default) Attempts to use huge pages; falls back to regular pages if unavailable.
  • on – Forces huge page usage; PostgreSQL won’t start if huge pages can’t be allocated.
  • off – Disables use of huge pages.

Important Notes:

To apply changes to this parameter, a PostgreSQL server restart is required.

The operating system must support huge pages, and they must be pre-allocated or enabled via system settings (e.g., vm.nr_hugepages on Linux).

Huge pages are beneficial mainly for systems with large shared memory allocations, such as those with high shared_buffers.
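As an illustration of the pre-allocation step, here is a hypothetical sizing sketch (the function, the 2MB page size, and the overhead factor are assumptions for estimation, not PostgreSQL APIs) for choosing a vm.nr_hugepages value on Linux:

```python
import math

# Hypothetical sizing helper: estimate vm.nr_hugepages on Linux, assuming
# 2MB huge pages plus a small overhead factor for PostgreSQL's shared
# memory structures beyond shared_buffers itself.
HUGE_PAGE_MB = 2

def estimate_nr_hugepages(shared_buffers_mb, overhead_factor=1.05):
    return math.ceil(shared_buffers_mb * overhead_factor / HUGE_PAGE_MB)

# For the 16GB shared_buffers example from earlier in this post:
print(estimate_nr_hugepages(16 * 1024))  # 8602 pages
```

PostgreSQL 15 and later can report the exact requirement via the shared_memory_size_in_huge_pages parameter, which is preferable to estimating by hand.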

Recommendation:

Use huge_pages = try for balanced performance and stability.

logical_decoding_work_mem:

The logical_decoding_work_mem parameter was introduced in PostgreSQL 13. It defines the maximum amount of memory (per replication slot) that can be used for logical decoding before changes are temporarily written (spilled) to disk.

Logical decoding is the foundation of features like logical replication, change data capture (CDC), and replication plugins. Efficient memory management during decoding is critical to minimize I/O and improve replication performance.

Show the current value:

postgres=# SHOW logical_decoding_work_mem ;
 logical_decoding_work_mem 
---------------------------
 64MB
(1 row)

Depending on the volume of changes flowing through your publications and subscriptions, you may need to adjust this parameter.

Best Practices:

The default value is typically sufficient for light to moderate transaction volumes in replication scenarios.

Increase the value if you notice frequent disk spills or replication lag during large or long-running transactions.

Tuning this parameter helps to retain decoded changes in memory, thereby reducing disk I/O and improving performance in high-throughput or low-latency replication environments.

Example:

In a replication slot handling high transaction volumes, if logical_decoding_work_mem is set too low, PostgreSQL will frequently spill changes to disk, hurting replication performance. Setting a higher value, such as 128MB, allows more changes to be processed in memory, improving throughput.
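A back-of-envelope way to reason about that example (the function and the per-change size are hypothetical illustration, not measured values):

```python
# Illustration: does a transaction's decoded change stream fit within
# logical_decoding_work_mem, or will it spill to disk? Numbers are
# hypothetical estimates, not PostgreSQL internals.

def spills_to_disk(changes, avg_change_bytes, work_mem_mb):
    return changes * avg_change_bytes > work_mem_mb * 1024 * 1024

# 1 million row changes at ~200 bytes each (~190 MB of decoded data):
print(spills_to_disk(1_000_000, 200, 64))   # True  -> spills at 64MB
print(spills_to_disk(1_000_000, 200, 256))  # False -> fits at 256MB
```

In PostgreSQL 14 and later you can check for actual spills by querying the spill_txns, spill_count, and spill_bytes columns of the pg_stat_replication_slots view rather than estimating.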

Conclusion:

PostgreSQL memory parameters are essential for achieving consistent, high-performance database behavior. Parameters like shared_buffers, work_mem, and maintenance_work_mem directly affect query execution and maintenance efficiency, while temp_buffers and logical_decoding_work_mem are key for temporary-table and replication-related operations. Additionally, enabling huge_pages can boost performance in memory-intensive environments. However, memory tuning must always be balanced with available system resources to prevent over-allocation and ensure operating system stability. Regular monitoring and testing should guide tuning decisions, adapting to workload changes over time.
