PG18 Hacktober: 31 Days of New Features: Supercharging psql with Pipeline Query Support

Welcome to a brand-new segment of PG18Hacktober: Client Applications, where we explore the exciting enhancements and hidden gems that make interacting with PostgreSQL 18 smoother, faster, and more powerful than ever.

PostgreSQL 18 introduces one of the most practical usability enhancements for database developers and administrators — pipeline query support in psql. This feature brings the powerful pipelining capability of libpq (first added in PostgreSQL 14) directly to the psql command-line client, dramatically improving query execution efficiency in high-latency environments.

Traditionally, when using psql, each query is sent to the PostgreSQL server and execution pauses until the result is returned. If you execute multiple small queries in sequence (for example, updates or inserts in a script), each round trip introduces latency overhead. Pipeline mode fixes that by allowing multiple queries to be sent at once, letting the client, network, and server work concurrently.

In PostgreSQL 18, psql now natively supports this through a set of built-in commands. This means you can use pipelining without writing C programs or using external drivers.
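
For instance, here is a minimal interactive sketch: queries ending with a semicolon are queued, and their results are displayed once the pipeline ends (output abbreviated):

postgres=# \startpipeline
postgres=# SELECT 1 AS one;
postgres=# SELECT 2 AS two;
postgres=# \endpipeline
 one
-----
   1
(1 row)

 two
-----
   2
(1 row)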

How Pipeline Mode Works

Pipeline mode batches several queries together before waiting for results. While the server is busy processing one query, the next query can already be transmitted, and partial results can flow back simultaneously. In simple terms:

  1. Without pipelining: Each query waits for a full server response before sending the next.
  2. With pipelining: Multiple queries are queued and sent together; responses are processed asynchronously.

This parallelism improves throughput by reducing network wait time. It is particularly effective when working over WANs, VPNs, or cloud connections where latency per round trip is noticeable.
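
As an illustration, a script that issues 50 small statements over a link with a 100 ms round trip spends roughly 50 × 100 ms = 5 s purely waiting on the network when each statement is sent individually; pipelined, the same batch costs close to a single round trip plus the server's processing time.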

What does the Postgres 18 documentation say?

  • In pipeline mode, statements are sent to the server without waiting for the results of previous statements.
  • All queries executed while a pipeline is ongoing use the extended query protocol. Queries are appended to the pipeline when ending with a semicolon.
  • The meta-commands \bind, \bind_named, \close_prepared, or \parse can be used in an ongoing pipeline. Other meta-commands like \g, \gx, or \gdesc are not allowed in pipeline mode.
  • When pipeline mode is active, a dedicated prompt variable %P is available to report the pipeline status.
    • off when not in a pipeline.
    • on when in an ongoing pipeline.
    • abort when in an aborted pipeline.
  • COPY is not supported while in pipeline mode.

Pipeline-specific new commands in PostgreSQL 18

PostgreSQL 18 introduces seven pipeline-specific meta-commands that empower developers to manage batching, synchronization and result handling efficiently.

Command          Description
---------------  ------------------------------------------------------------
\startpipeline   Begins a new pipeline. Subsequent queries are queued into it.
\endpipeline     Ends the pipeline, sending a final synchronization point and displaying all remaining results.
\syncpipeline    Adds a synchronization point to the ongoing pipeline without ending it or flushing the send buffer.
\sendpipeline    Appends the current query buffer to the pipeline (used with extended-protocol commands such as \bind).
\flushrequest    Appends a flush request to the pipeline, so that results can be read with \getresults without issuing a sync or ending the pipeline.
\flush           Manually pushes any unsent data to the server.
\getresults      Reads the available results of queries already sent in the pipeline.
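
For example, the extended-protocol meta-commands can be combined with \flushrequest and \getresults to read results in the middle of an ongoing pipeline. The following interactive sketch follows the documented \bind syntax (output abbreviated):

postgres=# \startpipeline
postgres=# SELECT $1::int AS answer \bind 42 \sendpipeline
postgres=# \flushrequest
postgres=# \getresults
 answer
--------
     42
(1 row)

postgres=# \endpipeline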

Quick performance test

Let’s run a quick performance test comparing execution with and without pipelining.

  • Set the prompt to display the pipeline status (%P).
postgres=# \set PROMPT1 '%n@%/%R [pipeline:%P] %# '
postgres@postgres= [pipeline:off] # \startpipeline
postgres@postgres= [pipeline:on] # \endpipeline
postgres@postgres= [pipeline:off] # 
  • Add artificial latency of 500 ms (optional), for example with tc netem as shown below.
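One way to simulate this on Linux (assuming the iproute2 tc utility is available and the test runs against localhost over the lo interface):

# add ~250 ms of delay on loopback; both directions traverse lo,
# so the effective round trip grows by roughly 500 ms
sudo tc qdisc add dev lo root netem delay 250ms

# remove the delay again once testing is done
sudo tc qdisc del dev lo root netem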
  • Create a test script non_pipelinetest.sql to check the execution time.
BEGIN;
SELECT count(*) FROM pg_class;
SELECT count(*) FROM pg_stat_activity;
SELECT count(*) FROM pg_indexes;
SELECT count(*) FROM pg_locks;
COMMIT;
  • Check the execution time for Non-pipeline script.
[postgres@ip-172-31-11-103 ~]$ time psql -h localhost -f non_pipelinetest.sql
BEGIN
 count
-------
   444
(1 row)

 count
-------
     9
(1 row)

 count
-------
   132
(1 row)

 count
-------
    23
(1 row)

COMMIT

real    0m10.078s
user    0m0.035s
sys     0m0.020s
  • Create a test script pipelinetest.sql to check execution time.
\startpipeline
\ir non_pipelinetest.sql
\flushrequest
\getresults
\endpipeline
  • Check the execution time for Pipeline script.
[postgres@ip-172-31-11-103 ~]$ time psql -h localhost -f pipelinetest.sql
BEGIN
 count
-------
   444
(1 row)

 count
-------
     9
(1 row)

 count
-------
   132
(1 row)

 count
-------
    23
(1 row)

COMMIT

real    0m6.075s
user    0m0.038s
sys     0m0.018s

Comparing the results of the non-pipeline and pipeline executions:

  • In the non-pipeline test (non_pipelinetest.sql), the total execution time was approximately 10.078 seconds. The queries executed sequentially, waiting for each to complete before sending the next.
  • In the pipeline test (pipelinetest.sql), using the \startpipeline and related pipeline commands to batch the queries, the total execution time was reduced to approximately 6.075 seconds.
  • This is a speedup of roughly 1.66× (10.078 s ÷ 6.075 s), or about a 40% reduction in total runtime, demonstrating how pipelining minimizes network round trips by letting multiple queries be sent and processed concurrently.
  • The effect is most pronounced in workloads involving multiple small queries and/or high network latency. Even on localhost, pipelining can provide significant speedups by decreasing query overhead.

Advanced Pipelining with Synchronization

We can insert synchronization checkpoints using \syncpipeline to separate dependent query batches.

postgres@postgres= [pipeline:off] # \startpipeline
postgres@postgres= [pipeline:on] # INSERT INTO users (region, user_id, name) VALUES ('Region_2', 40001, 'user_40001');
postgres@postgres= [pipeline:on] # \syncpipeline
postgres@postgres= [pipeline:on] # UPDATE users SET name = 'updated_user_40001' WHERE user_id = 40001;
postgres@postgres= [pipeline:on] # \syncpipeline
postgres@postgres= [pipeline:on] # SELECT * FROM users WHERE user_id = 40001;
postgres@postgres= [pipeline:on] # \endpipeline
INSERT 0 1
UPDATE 1
    id    |  region  | user_id |        name
----------+----------+---------+--------------------
 10000001 | Region_2 |   40001 | updated_user_40001
(1 row)

postgres@postgres= [pipeline:off] #

What have we done?

  • The first batch inserts a new row, creating new state in the database.
  • Using \syncpipeline after the insert ensures that the insert completes and is committed on the server before the update is sent.
  • The second batch updates the new row’s name.
  • Another \syncpipeline guarantees that the update completes before the final select.
  • The last batch retrieves the updated row to verify the changes were applied.
  • Each \syncpipeline acts as a checkpoint, allowing errors to be detected and processed incrementally.
  • This approach maintains transactional integrity while allowing pipelined grouping for performance benefits.

When to use \syncpipeline

  • Later queries depend on the results of earlier queries.
  • You need to checkpoint pipeline execution to handle errors gracefully.
  • You want to combine the performance benefits of pipelining with logical dependencies.

Transaction Semantics and Error Handling

Similar to multi-statement transactions (semicolon-chained queries), starting a pipeline in psql implicitly begins a transaction. If any query fails within the pipeline, the commands since the last synchronization point are rolled back. This behavior helps maintain atomicity and simplifies error recovery.

postgres@test_db= [pipeline:off] # \dt+
Did not find any tables.
postgres@test_db= [pipeline:off] # \startpipeline
postgres@test_db= [pipeline:on] # create table test1 (id int);
postgres@test_db= [pipeline:on] # create table test1 (id int);
postgres@test_db= [pipeline:on] # \endpipeline
CREATE TABLE
ERROR:  relation "test1" already exists
postgres@test_db= [pipeline:off] # \dt+
Did not find any tables.

Here, we initiated a pipeline and intentionally included two identical CREATE TABLE statements to trigger an error before closing the pipeline. As a result, we can observe that even the first table, which itself had no error, was not created.

Let’s verify this by creating a synchronization point within the pipeline.

postgres@test_db= [pipeline:off] # \dt+
Did not find any tables.
postgres@test_db= [pipeline:off] # \startpipeline
postgres@test_db= [pipeline:on] # create table test1 (id int);
postgres@test_db= [pipeline:on] # \syncpipeline
postgres@test_db= [pipeline:on] # create table test1 (id int);
postgres@test_db= [pipeline:on] # \endpipeline
CREATE TABLE
ERROR:  relation "test1" already exists
postgres@test_db= [pipeline:off] # \dt+
                                     List of tables
 Schema | Name  | Type  |  Owner   | Persistence | Access method |  Size   | Description
--------+-------+-------+----------+-------------+---------------+---------+-------------
 public | test1 | table | postgres | permanent   | heap          | 0 bytes |
(1 row)

Here, we can see that the first table was created because the \syncpipeline established a checkpoint before the failing statement.

Practical Use Cases

Key benefits appear in workloads where:

  • Queries are executed over remote or latency-affected connections.
  • Applications or scripts issue many small, independent commands.
  • Developers batch analytical or administrative commands using scripts (see the sketch below).
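
As a sketch, a hypothetical batch-loading script (the events table and the file name batch_load.sql are illustrative, not from the tests above) could group its statements like this:

\startpipeline
INSERT INTO events (payload) VALUES ('e1');
INSERT INTO events (payload) VALUES ('e2');
INSERT INTO events (payload) VALUES ('e3');
-- checkpoint: the first batch becomes its own implicit transaction
\syncpipeline
INSERT INTO events (payload) VALUES ('e4');
\endpipeline

Running it with time psql -h remotehost -f batch_load.sql makes it easy to compare against a plain sequential version of the same statements.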

Summary

PostgreSQL 18 takes client-side efficiency to the next level by introducing pipeline query support in psql, allowing multiple queries to be sent and processed concurrently without waiting for individual responses. This feature, previously available only through libpq, is now accessible directly from the command line — improving performance, especially in high-latency environments.

The post explains how pipeline mode works, details the new pipeline-specific meta-commands like \startpipeline, \syncpipeline, and \endpipeline, and demonstrates a practical performance comparison that shows a roughly 1.66× speedup (about 40% less runtime) versus traditional sequential execution.

It also covers advanced synchronization techniques using \syncpipeline for dependent queries, and highlights transaction behavior and error handling, showing how PostgreSQL maintains atomicity even in pipelined operations.

In essence, this enhancement makes scripting, automation, and high-latency query workloads significantly more efficient, marking a major usability leap for database developers and administrators in PostgreSQL 18.

What’s next in PG18Hacktober?

Up next in our PG18Hacktober journey, we dive deeper into Client Applications with an exploration of Named Prepared Statements in psql.

PostgreSQL 18 enhances the psql experience further by introducing meta-commands that allow you to parse (\parse), bind (\bind_named), and close (\close_prepared) named prepared statements — all directly from the command line.

This powerful addition lets developers precompile SQL statements, reuse them efficiently across sessions, and gain finer control over query execution flow — boosting both performance and productivity in interactive and scripted workloads.

Stay tuned — the next post will show how these new commands make psql smarter, faster, and more script-friendly than ever!
