At the heart of what we do is a deep belief that great technology thrives in great communities. That’s why we’ve been actively investing our time, expertise, and resources into the PostgreSQL ecosystem — not just as users, but as contributors, speakers, sponsors, and volunteers. Here’s a look at what we’ve been up to recently and what’s on the horizon.
PGMumbai — February 7, 2026
We kicked off the month in Mumbai, where Hari took the stage to deliver a talk on “Building Active-Active PostgreSQL: Multi-Master Replication at Scale.” The session took a deep dive into one of the more complex challenges in distributed database architecture: enabling true multi-master setups where every node can handle both reads and writes simultaneously, at scale.
The talk was met with great enthusiasm from the audience, sparking rich discussions around conflict resolution strategies, latency trade-offs, and real-world implementation patterns. Beyond the technical content, we were proud to be one of the official sponsors of PGMumbai, supporting the local PostgreSQL community in bringing this event to life.
Here is the recording: https://www.youtube.com/live/UHtbhTi9XTQ?si=xR35tFoW29ipMNoH
PgVizag — February 14, 2026
Just a week later, we were at PgVizag Meetup 5, where Lokesh delivered a comprehensive talk on “Oracle → PostgreSQL Migration Full Load with Ora2Pg + CDC with Debezium.” This is a topic close to many organizations’ hearts right now — the practicalities of migrating from Oracle to PostgreSQL without disrupting live operations.
Lokesh walked attendees through the full migration lifecycle: using Ora2Pg for the initial full load, and leveraging Debezium for Change Data Capture (CDC) to keep data in sync during the transition window. The talk offered actionable insights for teams navigating this migration journey, regardless of where they are in the process.
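The CDC half of the pipeline Lokesh described is typically wired up by registering a Debezium Oracle connector with Kafka Connect. Here is a minimal sketch of such a connector configuration — hostnames, credentials, topic names, and the table list are all placeholder assumptions, not values from the talk:

```json
{
  "name": "oracle-cdc-connector",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "database.hostname": "oracle-host",
    "database.port": "1521",
    "database.user": "c##dbzuser",
    "database.password": "********",
    "database.dbname": "ORCLCDB",
    "topic.prefix": "oracle",
    "table.include.list": "APP.ORDERS,APP.CUSTOMERS",
    "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
    "schema.history.internal.kafka.topic": "schema-changes.oracle"
  }
}
```

In a typical setup this JSON is POSTed to the Kafka Connect REST API, after which the connector begins streaming change events from the Oracle redo logs into Kafka topics.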
Here is the recording: https://youtu.be/_ZMzGJfTQgI?si=0tvUolDuJfunwGKe
We were also a proud sponsor of PgVizag, continuing our commitment to supporting PostgreSQL meetups and communities across India.
Here are some of the questions asked during Lokesh’s talk:
- Instead of using multiple CDC layers like Debezium and Kafka, why did you not consider other CDC tools?
Response: We did consider other CDC tools, including simpler options that do not require multiple layers. However, we selected Debezium with Apache Kafka because it provides a more reliable and scalable solution.
Some alternative tools use database triggers (for example, SymmetricDS). Trigger-based approaches can increase load on the source Oracle database, which may affect performance. In contrast, Debezium reads directly from Oracle redo logs, reducing impact on the source system.
Although adding Kafka introduces another layer, it offers important benefits such as fault tolerance, message durability, and the ability to replay data if needed. It also allows us to scale easily and support future use cases like additional downstream systems or real-time analytics.
In summary, while simpler CDC tools exist, we chose Debezium with Kafka because it provides better reliability, scalability, and long-term flexibility rather than just a simpler setup.
- How does Debezium hold up as a CDC tool for large data sets? We had to replace Debezium with GoldenGate, although it is expensive, because Debezium always breaks when using large data sets (e.g., DB size > 1TB).
Response: Debezium is a strong and widely used CDC tool, but its performance with very large databases (for example, systems larger than 1TB) depends greatly on how it is configured and how the overall infrastructure is designed.
Debezium does not continuously scan the entire database. Instead, it reads changes from Oracle redo logs. However, challenges typically arise in large environments due to scale and transaction volume rather than database size alone.
One common issue is the initial snapshot process. When Debezium first connects to a large database, it may need to capture the existing data before streaming changes. For multi-terabyte databases, this snapshot can take a long time and consume significant memory, CPU, and network resources. If not properly tuned, this phase may cause instability or connector failures.
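When the initial load has already been handled elsewhere — for example, by Ora2Pg as in the migration approach above — the heavy snapshot phase can be skipped or tuned. A sketch of connector properties that are commonly adjusted for large databases (the values shown are illustrative assumptions, not recommendations):

```properties
# Skip the data snapshot when the initial full load was done by another
# tool (e.g. Ora2Pg); Debezium then captures only the schema and streams
# changes from the redo logs.
snapshot.mode=schema_only

# Rows fetched per round-trip during snapshotting, when a snapshot is taken.
snapshot.fetch.size=10000

# Cap the LogMiner batch size to smooth memory usage under heavy redo volume.
log.mining.batch.size.max=100000
```

Tuning these alongside Kafka Connect worker sizing is usually where multi-terabyte deployments succeed or fail.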
Another challenge is the high volume of redo logs generated by large systems. Heavy transaction loads can produce a continuous stream of changes. If Kafka brokers, Kafka Connect workers, or system resources are not sized appropriately, backpressure can occur. This can lead to lag, increased memory usage, or connector restarts. In many cases, what appears as Debezium “breaking” is actually infrastructure bottlenecks.
Additionally, Debezium typically relies on Oracle LogMiner (unless XStream is used). LogMiner itself has limitations under very high throughput workloads. In contrast, Oracle GoldenGate is deeply integrated with Oracle’s internal mechanisms and is specifically optimized for high-volume enterprise environments. That is one reason why GoldenGate often performs more consistently in very large, mission-critical systems, although it comes with significant licensing costs.
In summary, Debezium can handle large datasets, but it requires careful tuning, proper infrastructure sizing, and operational expertise—especially in environments exceeding 1TB with high transaction rates. GoldenGate, while expensive, is purpose-built for enterprise-scale Oracle workloads and may offer greater operational stability in extremely large, high-throughput systems.
The decision ultimately depends on budget, required performance, system complexity, and the team’s ability to manage and tune a Kafka-based CDC architecture.
- If any row gets deleted on the target, do you have the functionality to resync the missing data with Debezium + Kafka (since data integrity is the biggest constraint)?
Response: Debezium + Kafka does not automatically detect and repair target-side deletions. However, missing data can be recovered by:
i) Replaying retained Kafka events.
ii) Triggering a new snapshot from Oracle.
iii) Running reconciliation processes.
To ensure data integrity — especially when it is a critical constraint — CDC should be combined with monitoring, retention planning, and reconciliation strategies rather than relying on streaming alone.
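The reconciliation step can start as simply as a key-level diff between source and target. A minimal sketch, assuming the primary keys have already been fetched from Oracle and PostgreSQL (the table and values here are hypothetical, purely for illustration):

```python
# Minimal key-based reconciliation pass: compare primary keys on the
# source and target and report rows missing downstream. In practice the
# two lists would come from SELECT queries against Oracle and PostgreSQL.

def find_missing_keys(source_keys, target_keys):
    """Return primary keys present on the source but absent on the target."""
    return sorted(set(source_keys) - set(target_keys))

# Example: row id 3 was deleted on the target and must be resynced.
source = [1, 2, 3, 4, 5]
target = [1, 2, 4, 5]
print(find_missing_keys(source, target))  # -> [3]
```

For large tables, the same idea is usually applied to per-chunk checksums rather than raw keys, so only mismatched ranges need a row-level comparison.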
PgHyd — February 20, 2026
In the third week of February, our team showed up to PGHyd Meetup #13 — and this time, in a different capacity. When it became clear that the audience was largely made up of students who might find deep technical sessions hard to follow, Hari stepped up at the last moment and pivoted his talk to deliver “All You Need is PostgreSQL” — a session crafted on the fly to make PostgreSQL approachable, relatable, and exciting for a younger audience.
The talk struck exactly the right chord, breaking down PostgreSQL’s core concepts in a way that resonated with beginners and sparked genuine curiosity in the room. It was a great reminder that community isn’t just about advancing the cutting edge — it’s equally about bringing the next generation along for the journey.
Here is the recording: https://youtu.be/-OZqb9aCSvI?si=wyUIg3j2YzflnWVn
Keerthi and Jeswita represented Postgres Women India, sharing more about what they’re building at PWI — including the Upskill Program, Virtual Leadership Summit, Roadmap 2026, and Career Connect Day, all focused on enabling learning, leadership, and career growth across the ecosystem.
Here is the recording: https://youtu.be/3kQ9XJ6D-Tw?si=UN_jnc_JI7CEKqgr
Our team also volunteered at the event, helping organize and run things behind the scenes.
Looking Ahead: PGConf 2026
We have exciting news on the conference front — two talks from our team have been selected for PGConf 2026! We can’t wait to share more details on the sessions soon. This is a testament to the expertise our team has built and our commitment to contributing meaningful knowledge back to the community.
Stay tuned for the talks!!
Here is the schedule of our talks:
Inside PostgreSQL High Availability: Quorum, Split-Brain, and Failover at Scale by Venkat Akhil & Shashidhar Dakuri.
PG18 Hacktober: 31 Days of New Features by Hari Kiran.
See you there!!
Why This Matters to Us
Our involvement in the PostgreSQL community isn’t a marketing exercise — it’s a reflection of our values. We use PostgreSQL, we build on PostgreSQL, and we believe that investing in the community that sustains it is simply the right thing to do.
Whether it’s through technical talks that push the boundaries of what’s possible, sponsorships that help events reach more people, or volunteering that keeps the community running smoothly — we’re here for all of it.
We’re grateful for every organizer, attendee, and fellow community member who makes these events worthwhile. Here’s to more collaboration, more learning, and more PostgreSQL! 🐘
Follow us for updates on our upcoming PGConf 2026 sessions and future community activities.
