One Tuple, Many Steps: Inside PostgreSQL Logical Replication

Logical replication is one of the most fascinating internals of PostgreSQL. Unlike physical replication, which works at the block level, logical replication understands data changes: rows being inserted, updated, or deleted.

In this blog, we’ll follow the life of a single tuple as it travels through PostgreSQL’s logical replication pipeline, from the moment it changes on the publisher to the moment it’s applied on the subscriber.

A Tuple Changes on the Publisher

Everything starts with a simple SQL statement:

INSERT INTO orders VALUES (101, 'paid', now());

At this moment:

  • PostgreSQL modifies the table heap
  • A new tuple version is created
  • The transaction is not yet committed

PostgreSQL never forgets a change: every modification is recorded in the Write-Ahead Log (WAL).

WAL: Recording the Change

WAL is PostgreSQL’s source of truth. When a tuple is inserted, updated, or deleted:

  • A WAL record is generated
  • The record contains logical information (for logical replication) in addition to physical details
  • The change is associated with a transaction ID (XID)

For logical replication, PostgreSQL ensures that WAL contains enough information to reconstruct row-level changes, such as:

  • Relation OID
  • Column values (or replica identity)
  • Transaction boundaries

At this stage, WAL is just a sequential log with no replication logic yet.
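
That extra row-level detail is only written when the publisher runs with wal_level = logical. A quick check and (if needed) change, assuming superuser access:

```sql
-- Logical decoding requires wal_level = logical on the publisher
SHOW wal_level;

-- Raise it if necessary (takes effect only after a server restart)
ALTER SYSTEM SET wal_level = 'logical';
```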

Replication Slot: Holding the Line

Logical replication uses logical replication slots. Why slots matter:

  • They prevent WAL from being recycled too early
  • They guarantee that no change is lost, even if the subscriber is slow or disconnected

Each logical slot tracks:

  • The confirmed flush LSN, i.e. how far the subscriber has safely consumed WAL
  • The restart LSN, the oldest WAL position decoding still needs

Think of the replication slot as a bookmark that says:

“Don’t delete WAL older than this point; I still need it.”
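
As a sketch, a logical slot can be created and inspected directly from SQL; the slot name my_slot here is arbitrary:

```sql
-- Create a logical slot backed by the built-in pgoutput plugin
SELECT pg_create_logical_replication_slot('my_slot', 'pgoutput');

-- Inspect the "bookmark": restart_lsn is the oldest WAL the slot still holds
SELECT slot_name, plugin, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots;
```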

Walsender: Streaming Changes

On the publisher side, a walsender process is started for each logical subscriber.
The walsender:

  • Reads WAL starting from the slot’s restart LSN
  • Sends WAL records over the replication connection
  • Does not apply any logic itself

But raw WAL is not yet usable for logical replication. That’s where logical decoding comes in.
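
Each walsender shows up in pg_stat_replication on the publisher, so you can watch how far it has streamed:

```sql
-- One row per walsender; the gap between sent_lsn and flush_lsn
-- is WAL that is in flight but not yet durable on the subscriber
SELECT pid, application_name, state, sent_lsn, flush_lsn
FROM pg_stat_replication;
```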

Logical Decoding & the Reorder Buffer

Logical decoding converts WAL records into logical changes. The challenge: WAL is written incrementally, but logical replication needs transactional consistency.

Enter the reorder buffer.

The reorder buffer:

  • Collects WAL records per transaction
  • Reorders them if needed
  • Waits until the transaction commits
  • Emits changes only after commit

This ensures:

  • No partial transactions are sent
  • Commit order is preserved
  • Rollbacks are ignored

Only committed tuple changes move forward.
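
You can observe this commit-ordered behavior from SQL using test_decoding, a demo output plugin shipped with PostgreSQL; the slot name demo_slot is arbitrary. A transaction's changes appear only after its COMMIT record is decoded:

```sql
SELECT pg_create_logical_replication_slot('demo_slot', 'test_decoding');

-- ... run and commit an INSERT/UPDATE in another session ...

-- Peek without consuming: each committed transaction arrives as
-- BEGIN, its row changes, then COMMIT; rolled-back ones never appear
SELECT lsn, xid, data
FROM pg_logical_slot_peek_changes('demo_slot', NULL, NULL);
```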

pgoutput Plugin: Shaping the Message

Logical decoding uses an output plugin. For built-in logical replication, this is pgoutput. The plugin:

  • Translates decoded changes into a protocol that the subscriber understands
  • Sends metadata (relations, columns, replica identity)
  • Streams row-level changes (INSERT/UPDATE/DELETE)
  • Preserves transactional boundaries

At this point, the tuple is no longer just WAL—it’s a logical replication message.
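
pgoutput only streams what a publication declares. On the publisher, that looks like the following (the publication name orders_pub is an example):

```sql
-- Publish row-level changes for the orders table
CREATE PUBLICATION orders_pub FOR TABLE orders;

-- Optionally restrict which operations are published
ALTER PUBLICATION orders_pub SET (publish = 'insert, update, delete');
```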

WAL Receiver: Receiving Changes

On the subscriber side, a walreceiver process:

  • Maintains a replication connection
  • Receives logical messages from the publisher
  • Writes them into the local replication stream

The subscriber acknowledges:

  • Received LSN
  • Flushed LSN

This feedback allows the publisher to:

  • Advance the replication slot
  • Recycle old WAL safely
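
The gap between the publisher's current WAL position and the slot's confirmed flush LSN is exactly the WAL being retained for the subscriber; a rough monitoring query:

```sql
-- How much WAL each logical slot is holding back from recycling
SELECT slot_name,
       pg_size_pretty(
           pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)
       ) AS retained_wal
FROM pg_replication_slots
WHERE slot_type = 'logical';
```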

Apply Worker: Rebuilding the Tuple

Now comes the final transformation. The apply worker:

  • Reads logical replication messages
  • Maps publisher relations to subscriber relations
  • Reconstructs SQL-level changes
  • Applies them inside transactions

For example:

  • INSERT → INSERT on subscriber
  • UPDATE → UPDATE using replica identity
  • DELETE → DELETE using replica identity

All changes are applied transactionally, preserving:

  • Commit order
  • Atomicity
  • Consistency

To the subscriber, it feels like the change happened locally.
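
The apply worker is spawned by a subscription on the subscriber side. A minimal sketch, with a hypothetical connection string and the example publication name orders_pub:

```sql
-- Creating the subscription starts the apply worker
CREATE SUBSCRIPTION orders_sub
    CONNECTION 'host=publisher.example dbname=shop user=repl'
    PUBLICATION orders_pub;

-- (on the publisher) UPDATE/DELETE are matched via the replica identity;
-- the primary key is the default, but a table without one can fall back
-- to full-row identity at the cost of larger WAL records
ALTER TABLE orders REPLICA IDENTITY FULL;
```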

Table Sync Worker: Catching Up New Tables

When a new table is added to a subscription:

  • A table sync worker performs an initial data copy
  • It takes a consistent snapshot of the publisher
  • Copies existing rows
  • Then hands over control to the apply worker for ongoing changes

This ensures:

  • No data gaps
  • No duplicate rows
  • Smooth transition from copy to replication
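
The per-table sync state is visible in pg_subscription_rel on the subscriber:

```sql
-- srsubstate: i = initialize, d = data copy, s = synchronized, r = ready
SELECT srrelid::regclass AS table_name, srsubstate
FROM pg_subscription_rel;
```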

End of the Journey (Until the Next Change)

Once the apply worker commits:

  • The tuple officially exists on the subscriber
  • Feedback is sent upstream
  • The replication slot advances
  • WAL cleanup continues

And the cycle repeats for the next tuple.

Conclusion

Logical replication in PostgreSQL is a masterclass in distributed data consistency. What appears as a simple INSERT or UPDATE on the publisher triggers an intricate dance of components — each with a specific role in ensuring that data arrives accurately, in order, and without loss.

The life of a tuple in logical replication is a carefully orchestrated journey. From WAL’s immutable record-keeping to the reorder buffer’s transactional guarantees, from replication slots preventing premature cleanup to apply workers faithfully reconstructing changes—every piece works in concert to deliver reliable, row-level replication.

Understanding this journey helps you:

  • Troubleshoot replication lag by identifying bottlenecks in the pipeline
  • Design better schemas with appropriate replica identities
  • Monitor effectively by knowing which LSN positions matter
  • Scale confidently by understanding the resource implications of each stage

The beauty of logical replication lies not just in what it accomplishes, but in how PostgreSQL orchestrates dozens of moving parts to make it feel effortless. The next time you see a replicated tuple on your subscriber, you’ll know the remarkable journey it took to get there.
