PG18 Hacktober : 31 Days of New Features : Named Prepared Statements

Welcome to the 24th blog post of PG18 Hacktober!

The release of PostgreSQL 18 introduced a valuable enhancement for developers who regularly profile and optimize query execution: named prepared statement support directly in psql. This post walks through what named prepared statements are, their performance impact, the new PostgreSQL 18 capabilities, and how to test and measure them in practice.

What Are Named Prepared Statements?

A prepared statement in PostgreSQL is a precompiled SQL execution plan stored on the server for reuse. Named prepared statements persist across multiple executions in the same session and are explicitly managed by the developer using the SQL PREPARE command.

Example:

PREPARE get_rich_accounts(numeric) AS
SELECT id, customer, balance
FROM accounts
WHERE balance > $1
ORDER BY balance DESC
LIMIT 5;
EXECUTE get_rich_accounts(5000);

Here, PostgreSQL stores the parsed and planned query server-side, and executes it efficiently for successive parameter values.
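Once prepared, the statement can be re-executed with different parameters and removed explicitly when it is no longer needed (a minimal continuation of the example above):

EXECUTE get_rich_accounts(9000);   -- reuses the stored statement with a new parameter
DEALLOCATE get_rich_accounts;      -- drops this statement
DEALLOCATE ALL;                    -- or drop every prepared statement in the session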

Anonymous (unnamed) prepared statements, in contrast, are created implicitly by client libraries through the extended query protocol. The unnamed statement is replaced by the next unnamed Parse message, and it is not visible in system views such as pg_prepared_statements.
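You can see the unnamed variant in action with psql's \bind meta-command (available since PostgreSQL 16), which uses the unnamed prepared statement under the hood. A quick sketch:

SELECT $1::int * 2 AS doubled \bind 21 \g
SELECT count(*) FROM pg_prepared_statements;   -- counts named statements only; the unnamed one is not listed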

What does the documentation say?

psql named prepared statements
psql can parse (\parse), bind (\bind_named), and close (\close_prepared) named prepared statements.

PostgreSQL 18 adds first-class support for named prepared statements in the psql client, functionality previously available only through client libraries and drivers.

  • \parse: defines a named prepared statement from the preceding query
  • \bind_named: binds parameters to it; the query is sent by a terminator such as \g
  • \close_prepared: deallocates the statement

Example session:

SELECT $2 || $1 AS str \parse q
\bind_named q 'World!' 'Hello, ' \g
\close_prepared q

This demonstrates how the terminal now directly interacts with the extended query protocol, enabling developers to craft reusable queries interactively, debug plan caching behavior, and control statement lifecycles without embedded SQL code.
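Because the statement survives until it is closed, one \parse can feed many executions with different parameters. A hypothetical session sketch, assuming the accounts table created in the testing section below:

SELECT customer, balance FROM accounts WHERE balance > $1 \parse rich
\bind_named rich 8000 \g
\bind_named rich 9500 \g
\close_prepared rich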

Inspecting pg_prepared_statements

The pg_prepared_statements view lists all prepared statements visible within the current backend session:

SELECT name, statement, prepare_time,
       parameter_types, generic_plans, custom_plans
FROM pg_prepared_statements;
       name        |                          statement                           |    prepare_time     | generic_plans | custom_plans
-------------------+--------------------------------------------------------------+---------------------+---------------+--------------
 get_rich_accounts | SELECT id, customer, balance FROM accounts WHERE balance > $1 | 2025-10-24 11:00:13 |             1 |            0
(1 row)

The columns show how often PostgreSQL chooses generic or custom plans — allowing you to verify whether the planner optimizes queries based on parameter variability.
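To watch these counters move, execute a statement a few more times and re-query the view (assuming the get_rich_accounts statement prepared earlier in the same session):

EXECUTE get_rich_accounts(7000);
EXECUTE get_rich_accounts(9000);
SELECT name, generic_plans, custom_plans
FROM pg_prepared_statements
WHERE name = 'get_rich_accounts';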

Performance behavior and plan strategy

PostgreSQL creates custom plans initially for prepared statements and may switch to a generic plan after several executions when it deems the fixed plan sufficiently efficient.
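With the default plan_cache_mode = auto, the switch is visible in EXPLAIN output: custom plans substitute the literal parameter value into the filter, while a generic plan shows the $1 placeholder. A sketch of what you might observe (the cnt statement name is arbitrary):

PREPARE cnt(numeric) AS SELECT count(*) FROM accounts WHERE balance > $1;
EXPLAIN EXECUTE cnt(8000);   -- early executions: Filter: (balance > '8000'::numeric)
EXECUTE cnt(8000); EXECUTE cnt(8000); EXECUTE cnt(8000); EXECUTE cnt(8000);
EXPLAIN EXECUTE cnt(8000);   -- after five executions, may show Filter: (balance > $1)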

Performance gains include:

  • Skipping repeated parse and plan overheads
  • Reduced network traffic since parameters are transmitted separately
  • Improved throughput for repetitive query workloads such as OLTP systems

However, prepared statements remain session-bound — they vanish when connections close. This adds complications for connection pooling:

  • Session pooling: Prepared statements persist across transactions.
  • Transaction pooling: Each transaction may run on a different backend, so statements are not retained.

PgBouncer 1.21+ mitigates this by tracking prepared statements in transaction mode.
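The relevant PgBouncer setting is max_prepared_statements; a non-zero value enables prepared statement tracking in transaction pooling mode. The values below are illustrative, not recommendations:

[pgbouncer]
pool_mode = transaction
max_prepared_statements = 200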

Testing

A complete workflow, tested on a PostgreSQL 18 instance with a modest dataset.

Setup:

postgres=# CREATE DATABASE prep_test;
CREATE DATABASE
postgres=# \c prep_test
You are now connected to database "prep_test" as user "postgres".
prep_test=# CREATE TABLE accounts (
    id serial PRIMARY KEY,
    customer text,
    balance numeric
);
CREATE TABLE
prep_test=# INSERT INTO accounts (customer, balance)
SELECT 'user_' || g, (random() * 10000)::numeric FROM generate_series(1, 100000) g;
INSERT 0 100000

Step 1: Create and execute prepared statements

prep_test=# PREPARE high_value(numeric) AS
SELECT * FROM accounts WHERE balance > $1;
PREPARE
prep_test=# EXECUTE high_value(8000);
  id   |  customer  |      balance
-------+------------+------------------
     4 | user_4     | 9047.46340698785
    16 | user_16    | 8248.72327362598
    20 | user_20    | 9400.26809614429
    21 | user_21    | 9262.91281328994
    27 | user_27    | 9960.53372188808
    29 | user_29    | 9147.66712833384
    32 | user_32    | 8828.24616554666
    33 | user_33    | 9104.53952454853
    37 | user_37    | 8205.11925460575
    43 | user_43    | 9756.53421576248
    45 | user_45    | 8753.14320982015
    46 | user_46    | 9841.93396217217
    50 | user_50    | 9211.58252074313
    57 | user_57    | 9412.89994230521
    58 | user_58    | 9652.31157393867
    60 | user_60    | 9431.69456901297
    71 | user_71    | 9217.87839857236

Step 2: Explore new psql protocol interaction

prep_test=# SELECT $1::int + $2::int \parse demo;

prep_test=# \bind_named demo 10 20 \g
ERROR:  prepared statement "demo" does not exist
The trailing semicolon is the culprit here: it is taken as part of the \parse argument, so the statement ends up with a different name than demo. Dropping the semicolon works:
prep_test=# SELECT $1::int + $2::int \parse demo

prep_test=# \bind_named demo 10 20 \g
 ?column?
----------
       30
(1 row)
prep_test=# \close_prepared demo

Step 3: Monitor plans

prep_test=# SET plan_cache_mode = force_generic_plan;
SET
prep_test=# EXPLAIN EXECUTE high_value(8000);
                           QUERY PLAN
----------------------------------------------------------------
 Seq Scan on accounts  (cost=0.00..1985.00 rows=33333 width=25)
   Filter: (balance > $1)
(2 rows)
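For contrast, forcing a custom plan makes the planner substitute the actual parameter value, which can produce better row estimates for skewed data:

SET plan_cache_mode = force_custom_plan;
EXPLAIN EXECUTE high_value(8000);
-- the Filter line now shows the literal: (balance > '8000'::numeric)
RESET plan_cache_mode;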

Step 4: Benchmark

Use pgbench for comparative testing (the built-in TPC-B-like scenario requires initializing its tables once with pgbench -i prep_test):

postgres@node:~$ pgbench -c 10 -T 30 -M simple prep_test
pgbench (18.0 (Ubuntu 18.0-1.pgdg24.04+3))
starting vacuum…end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1
query mode: simple
number of clients: 10
number of threads: 1
maximum number of tries: 1
duration: 30 s
number of transactions actually processed: 7365
number of failed transactions: 0 (0.000%)
latency average = 40.677 ms
initial connection time = 81.467 ms
tps = 245.837543 (without initial connection time)
postgres@node:~$ pgbench -c 10 -T 30 -M prepared prep_test
pgbench (18.0 (Ubuntu 18.0-1.pgdg24.04+3))
starting vacuum…end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1
query mode: prepared
number of clients: 10
number of threads: 1
maximum number of tries: 1
duration: 30 s
number of transactions actually processed: 7426
number of failed transactions: 0 (0.000%)
latency average = 40.350 ms
initial connection time = 79.134 ms
tps = 247.831202 (without initial connection time)

You will typically observe a higher TPS (transactions per second) using prepared mode. The gap is modest for the simple built-in transactions and grows for workloads where planning is a larger share of total query cost.
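To measure the effect on our own table, a custom script can be benchmarked in both modes (the file name bench.sql is arbitrary; -n skips vacuuming the built-in pgbench tables, which do not exist here):

$ cat > bench.sql <<'EOF'
\set b random(1, 10000)
SELECT count(*) FROM accounts WHERE balance > :b;
EOF
$ pgbench -n -f bench.sql -M simple -c 10 -T 30 prep_test
$ pgbench -n -f bench.sql -M prepared -c 10 -T 30 prep_test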

When to use?

Named prepared statements are ideal when:

  • A session executes many variations of the same query
  • Query planning overhead is measurable (joins, aggregates, or views)
  • You need consistent performance across parameterized executions

They are less useful when:

  • Queries run only once or vary significantly in structure
  • Transaction pooling resets connections frequently

Conclusion

PostgreSQL 18 refined the developer experience by offering interactive prepared statement control directly in psql. Named prepared statements deliver measurable performance benefits, better session introspection using pg_prepared_statements, and improved interoperability with clients and connection poolers like PgBouncer.

For seasoned database developers, this release closes the gap between backend protocol efficiency and hands-on usability from the command line — turning prepared statement testing and profiling into a straightforward, scriptable part of performance tuning workflows.
