Migration from Supabase to RDS: User Guide

In today’s rapidly evolving tech landscape, organizations often find themselves needing to adapt their infrastructure to better meet their needs. Whether driven by requirements for greater control, performance, scalability, or compliance, migrating from a managed service like Supabase to an on-premises or managed database service such as AWS RDS is a significant undertaking. Supabase is a powerful, open-source Backend-as-a-Service platform built on PostgreSQL, offering features such as real-time capabilities, authentication, and auto-scaling; even so, there can be good reasons to migrate away from it. In this blog, let’s explore a step-by-step approach to the PostgreSQL commands needed for a smooth migration.

Migrating from one environment to another involves various strategies, each suited to different constraints and requirements. The approach you choose will depend on factors such as system limitations, acceptable downtime, and specific migration needs. Because it keeps downtime to a minimum, we are going with the logical replication strategy for this migration.

Step 1: Export Users

Before setting up the replication, let’s export the users from Supabase to RDS using pg_dumpall.

# Using pg_dumpall to get the users
pg_dumpall --globals-only \
  -U postgres -h <your_supabase_hostname> \
  2>stderr_globals.txt > supabase_globals.sql

# To restore them in RDS
psql -f supabase_globals.sql -h <your_RDS_hostname> -U postgres -d postgres -W

Since Supabase is a managed service, it includes its own default users, which will be present in the exported supabase_globals.sql file. These default users may not be compatible with AWS RDS, which is also a managed service. You can either remove these users from the SQL file before restoring it, or delete them from RDS after the restoration process is complete.
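
As an example, here is a minimal sketch of filtering the dump with grep. The role names below are typical of a Supabase export and are assumptions; check supabase_globals.sql and adjust the list to what is actually present in your file:

# Strip statements that reference Supabase-managed roles (adjust the list to your export)
grep -vE 'supabase_admin|supabase_auth_admin|supabase_storage_admin|authenticator|pgbouncer' \
  supabase_globals.sql > rds_globals.sql
# Then restore rds_globals.sql instead of supabase_globals.sql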

Step 2: Export Schema

After the users are imported into RDS, we need to export the schema as well.

# Using pg_dump to export the schema
pg_dump --schema-only \
  -U postgres -h <your_supabase_hostname> \
  --schema=<schema_name> --schema=<schema_name> \
  2>stderr_schema.txt > postgres_schema_only.sql

# To restore them in RDS
psql -f postgres_schema_only.sql -h <your_RDS_hostname> -U postgres -d postgres -W

Export the necessary schemas and restore them, just as you did for the users. You may skip some default schemas if they are not needed.
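
Alternatively, you can name the schemas to skip rather than the ones to keep, using pg_dump's --exclude-schema flag. A sketch, assuming typical Supabase-managed schemas such as auth, storage, and realtime (verify the list in your project):

# Exclude Supabase-managed schemas instead of listing the ones you need
pg_dump --schema-only \
  -U postgres -h <your_supabase_hostname> \
  --exclude-schema=auth --exclude-schema=storage --exclude-schema=realtime \
  2>stderr_schema.txt > postgres_schema_only.sql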

Step 3: Set Up Logical Replication
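
Logical replication requires wal_level to be set to logical on the publisher. Supabase typically has this enabled by default for its Realtime features, but it is worth confirming on the source before you begin:

# On the source (publisher); this must return 'logical'
SHOW wal_level;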

On Source

  • Create a publication for the tables in the required schemas.
# Create a publication for all tables in schema_1 and schema_2 (FOR TABLES IN SCHEMA requires PostgreSQL 15+). Adjust the schema names to your environment.
CREATE PUBLICATION <publication_name> FOR TABLES IN SCHEMA schema_1, schema_2;
  • Check the replication slot status after creating the subscription (see the target steps below).
# To check the status of the replication slot
SELECT * FROM pg_replication_slots;

Make sure that replication is active (the slot’s active column should show t).
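
You can also watch the walsender process on the source to confirm that data is streaming to the target and to gauge how far behind it is. A quick check (the lag figure is in bytes):

# On the source: confirm streaming and measure replay lag
SELECT application_name, state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;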

On Target

  • Create a subscription for the publication.
CREATE SUBSCRIPTION <subscription_name>
CONNECTION 'dbname=postgres host=<your_supabase_hostname> user=postgres password=<postgres_password>'
PUBLICATION <publication_name>
WITH (copy_data = true);

A replication slot will be created automatically on the source.

  • Check the subscription status
SELECT * FROM pg_subscription_rel;

This shows the relationship between the subscription and the tables it replicates.
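
For a more readable view, the srsubstate column encodes each table’s synchronization state (i = initializing, d = copying data, s = synchronized, r = ready):

# List each replicated table with its sync state
SELECT srrelid::regclass AS table_name, srsubstate
FROM pg_subscription_rel;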

Step 4: Validate the Data Migration

Ensure that all tables, indexes, and constraints have been imported correctly. Compare the data in your AWS RDS instance with the original data in Supabase to verify accuracy. You can use a tool such as DBeaver for validation.
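
A simple first-pass check is to compare per-table row counts on both sides. A sketch using the statistics views (n_live_tup is an estimate, so run an exact count(*) on any table you want to be certain about):

# Run on both Supabase and RDS, then diff the outputs
SELECT schemaname, relname, n_live_tup AS approx_row_count
FROM pg_stat_user_tables
ORDER BY schemaname, relname;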

Step 5: Migrate Sequences

There are a few limitations with this approach: logical replication does not replicate sequence values. To overcome that, you can use PostgreSQL’s setval function to set the value of each sequence. This is particularly useful when you want to ensure that a sequence is synchronised with the maximum value in a table column, which is often necessary after data has been imported or modified.
Note: You may want to generate dynamic SQL (as follows) beforehand and execute it just before the cutover.

SELECT setval('<sequence_name>', (SELECT max(<column_name>) FROM <table_name>), true);
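
With the third argument (is_called) set to true, the next nextval() call returns max + 1. Here is a sketch of a generator query that emits one such setval statement per sequence owned by a table column; it covers serial and identity columns, while standalone sequences need separate handling. Review the generated statements before running them:

# Generate a setval statement for every sequence owned by a table column
SELECT format(
         'SELECT setval(%L, COALESCE((SELECT max(%I) FROM %I.%I), 1), true);',
         quote_ident(ns.nspname) || '.' || quote_ident(s.relname),
         a.attname, tns.nspname, t.relname)
FROM pg_class s
JOIN pg_namespace ns  ON ns.oid = s.relnamespace
JOIN pg_depend d      ON d.objid = s.oid
                     AND d.classid = 'pg_class'::regclass
                     AND d.refclassid = 'pg_class'::regclass
                     AND d.deptype IN ('a', 'i')
JOIN pg_class t       ON t.oid = d.refobjid
JOIN pg_namespace tns ON tns.oid = t.relnamespace
JOIN pg_attribute a   ON a.attrelid = t.oid AND a.attnum = d.refobjsubid
WHERE s.relkind = 'S';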

However, we need to stop connections to the source while setting these sequence values, to ensure no data is pushed from the application to the source (and from the source to the target), since the maximum values may change if data keeps coming in. This means some downtime is needed while re-pointing the application to the target (i.e., on-premises or a different cloud) database. Also, such setval statements must be created for each and every sequence existing in the database.
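
To stop writes during this window, you can terminate the remaining application sessions on the source. A sketch, assuming the application connects to the postgres database (adjust the filter to your application’s database or user):

# Terminate application connections on the source (adjust the datname filter)
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'postgres'
  AND pid <> pg_backend_pid();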

Step 6: Cutover

After swapping the connection strings in your application from the old database to the new one and redirecting all traffic, we need to cut over the replication from the old database.

On New Database (Target)

We need to disable the subscription and then drop it

ALTER SUBSCRIPTION <subscription_name> DISABLE;

DROP SUBSCRIPTION <subscription_name>;

The replication slot will be dropped automatically once the subscription is dropped.
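
You can confirm this on the old database by re-checking the slots:

# On the old database: the subscription's slot should no longer be listed
SELECT slot_name, active FROM pg_replication_slots;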

On Old Database (Source)

We need to drop the publication

DROP PUBLICATION <publication_name>;

Conclusion

Migrating from a managed service like Supabase to an on-premises or managed database service such as AWS RDS is a complex endeavor. It requires a strategic approach to ensure minimal disruption and data integrity. Begin by exporting users and schema from Supabase, noting that Supabase does not give the postgres user superuser privileges; it is still the most privileged role available, so using the postgres user for migration tasks is advisable. Next, leverage logical replication to synchronize data with minimal downtime, and carefully validate data and sequences in the target environment.

Challenges include ensuring user permissions and system compatibility: Supabase users cannot be granted replication roles, and version differences may require specific handling. By meticulously planning, executing, and validating each step, organizations can smoothly transition their database infrastructure to AWS RDS, achieving better control, performance, and scalability.

Stay tuned and Happy Migrating …
