Naresh Reddy Regalla
Cloud Solution Architect and Business Process Management Workflow Strategist
Chicago, Illinois, United States
Naresh Reddy Regalla is an accomplished IT leader and engineer with over 20 years of experience delivering enterprise technology solutions across banking, retail, and healthcare sectors. With deep expertise in business process automation and cloud-native solutions, he has established a strong reputation for driving innovation, operational efficiency, and compliance while enabling organizations to scale securely.
Naresh specializes in IBM Business Automation Workflow (BAW), IBM BPM, and Alfresco Activiti, with a proven track record in workflow design, process optimization, and legacy modernization. His cloud architecture expertise spans OpenShift, Kubernetes, AWS, and Pivotal Cloud Foundry, where he has delivered highly resilient, containerized enterprise applications. He is equally skilled as a full-stack engineer, proficient in Java, Spring Boot, REST APIs, and modern frontend frameworks such as React and Angular.
Currently serving as an Expert Application Engineer at Discover Financial Services, Naresh plays a pivotal role in payments fraud detection and risk prevention initiatives, safeguarding the global payments ecosystem. His leadership has contributed to critical solutions, including a secure One-Time Password API, transaction monitoring platforms, and advanced fraud prevention algorithms.
Naresh is a certified AWS Solutions Architect and IBM BPM expert, with a B.Tech in Computer Science & Engineering from Jawaharlal Nehru Technological University. Known for his collaborative leadership style, he has mentored high-performing teams and partnered with business stakeholders to deliver scalable and secure systems.
Passionate about digital transformation, Naresh continues to focus on bridging business and technology, ensuring enterprises remain future-ready in a rapidly evolving digital landscape.
Mastering PostgreSQL Range Partitioning: A Strategic Approach to High-Volume Transaction Management
Modern organizations increasingly rely on PostgreSQL to handle extensive transactional workloads, time-series data, and real-time analytics pipelines. Yet performance challenges emerge rapidly once tables begin accumulating billions of rows. Traditional approaches, such as monolithic tables, DELETE-based cleanup, and non-pruned queries, result in slowdowns, excessive I/O pressure, bloated indexes, and increased operational complexity. This session presents an efficient, real-world implementation of advanced PostgreSQL partitioning and lifecycle strategies that significantly improved performance, scalability, and maintenance reliability in an environment processing more than 20–25 million transactions every day.
The talk begins with a clear explanation of the architectural differences between PostgreSQL and Oracle partitioning. Many organizations migrating from Oracle assume that PostgreSQL partitioning behaves similarly, but PostgreSQL introduces essential considerations: the requirement to include the partition key in primary-key and unique constraints, the need to explicitly attach pre-existing indexes across partitions, and the absence of Oracle-style global indexes (PostgreSQL supports only partition-local indexes). By establishing this foundation, attendees can better understand why design decisions matter before implementing range-based or time-series partitioning at scale.
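To make the constraint difference concrete, here is a minimal sketch of the kind of range-partitioned table the session discusses. The table and column names are illustrative, not taken from the production system described:

```sql
-- Illustrative parent table for a range-partitioned design.
-- Unlike Oracle, PostgreSQL requires the partition key (txn_ts)
-- to be part of any primary-key or unique constraint.
CREATE TABLE transactions (
    txn_id  bigint        NOT NULL,
    txn_ts  timestamptz   NOT NULL,
    amount  numeric(12,2) NOT NULL,
    PRIMARY KEY (txn_id, txn_ts)   -- must include txn_ts
) PARTITION BY RANGE (txn_ts);
```

A plain `PRIMARY KEY (txn_id)` here would be rejected by PostgreSQL, which is one of the migration surprises the session covers.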
The next portion focuses on designing effective range-partitioned tables tailored for high-volume environments, specifically monthly and daily partitions. Using a real production example involving a fraud transaction monitoring system, I will show how the correct alignment of partition boundaries with natural query patterns drastically improves pruning efficiency. Queries that previously scanned massive datasets now target only relevant partitions, cutting execution times by 60–80% and reducing buffer cache pressure by up to 70%. The session will also cover indexing strategies, including automatic index propagation, partition-aware indexing, and the correct approach for parent-child index attachment to avoid operational surprises in production.
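The monthly-partition design and pruning behavior described above can be sketched as follows; the names and dates are hypothetical, and the figures quoted in the abstract come from the production system, not from this toy example:

```sql
-- Monthly child partitions; bounds are [inclusive, exclusive).
CREATE TABLE transactions_2025_01 PARTITION OF transactions
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
CREATE TABLE transactions_2025_02 PARTITION OF transactions
    FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');

-- An index created on the parent is automatically created on,
-- and attached to, every existing and future partition.
CREATE INDEX ON transactions (txn_ts);

-- A range predicate on the partition key lets the planner prune:
-- the plan should touch only transactions_2025_02.
EXPLAIN (COSTS OFF)
SELECT count(*) FROM transactions
WHERE txn_ts >= '2025-02-01' AND txn_ts < '2025-03-01';
```

Aligning partition boundaries with the predicates queries actually use (here, calendar months) is what makes pruning effective.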
A core highlight of this session is the detach-and-drop method, a far superior approach for removing old partitions compared to traditional DELETE operations. DELETE-based cleanup becomes increasingly problematic at high scale due to massive dead tuples, bloated indexes, aggressive VACUUM requirements, and the risk of long-running transactions blocking other workloads. In contrast, the detach-and-drop method allows us to safely remove partitions online, reclaim storage instantly, and avoid locking contention. I will demonstrate how the CONCURRENTLY option (available in PostgreSQL 14+) avoids taking an ACCESS EXCLUSIVE lock on the parent table during detachment, enabling zero-downtime data lifecycle operations even while the system continues serving active traffic.
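The detach-and-drop lifecycle described above reduces, in sketch form, to two statements (partition names hypothetical):

```sql
-- PostgreSQL 14+: detach without an ACCESS EXCLUSIVE lock on the
-- parent. Note: DETACH ... CONCURRENTLY cannot run inside a
-- transaction block.
ALTER TABLE transactions
    DETACH PARTITION transactions_2025_01 CONCURRENTLY;

-- The detached partition is now a standalone table; dropping it
-- reclaims its storage immediately, leaving no dead tuples for
-- VACUUM to chase, unlike a bulk DELETE.
DROP TABLE transactions_2025_01;
```

Deleting the same rows with `DELETE FROM transactions WHERE txn_ts < '2025-02-01'` would generate millions of dead tuples and index bloat; the detach-and-drop path sidesteps all of that.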
Attendees will learn how to verify partition state after detachment, how to handle foreign key relationships, what to monitor during detach operations, and how to plan for performance impacts when dropping extremely large partitions. The session will also provide operational best practices such as naming conventions, retention-aligned partition planning, and automated partition rotation using PL/pgSQL procedures or pg_partman.
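As one example of the automation discussed, here is a minimal PL/pgSQL rotation sketch that pre-creates the next month's partition (the procedure name and schedule are assumptions for illustration; pg_partman offers a more complete, battle-tested alternative):

```sql
-- Minimal rotation sketch: create next month's partition ahead
-- of time, e.g. invoked nightly from cron or a scheduler.
CREATE OR REPLACE PROCEDURE create_next_month_partition()
LANGUAGE plpgsql AS $$
DECLARE
    start_month date := date_trunc('month', now() + interval '1 month');
    part_name   text := format('transactions_%s',
                               to_char(start_month, 'YYYY_MM'));
BEGIN
    EXECUTE format(
        'CREATE TABLE IF NOT EXISTS %I PARTITION OF transactions
         FOR VALUES FROM (%L) TO (%L)',
        part_name, start_month, start_month + interval '1 month');
END;
$$;
```

After a detach, partition state can be verified by querying `pg_inherits` joined to `pg_class` to confirm the child no longer appears under the parent.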
By the end of the talk, participants will walk away with a complete end-to-end blueprint for designing, tuning, and maintaining PostgreSQL partitioning in enterprise-scale environments. Whether you are a database engineer, architect, or performance specialist, this session equips you with practical, reproducible techniques that you can immediately apply to any high-growth PostgreSQL system.
Technical Requirements:
1. Standard conference room setup with projector/HDMI connection.
2. Ability to display SQL code, architecture diagrams, and EXPLAIN ANALYZE output.
3. No additional hardware required.
4. Optional: Wi-Fi access for demonstrating query plans (not mandatory).
First Public Delivery:
This session has not been delivered publicly before. It is based on real production implementations and internal engineering work. PGDATA 2026 will be the first public presentation of this material.
Target Audience:
1. PostgreSQL DBAs and database engineers
2. Data platform architects
3. Application engineers working with large transactional workloads
4. Teams migrating from Oracle to PostgreSQL
5. Anyone responsible for system performance, data lifecycle management, and partition maintenance in high-volume environments
Session Takeaways:
1. How to design efficient range-based partitions for time-series or fast-growing datasets
2. How to compare Oracle and PostgreSQL partitioning behaviors when planning migrations
3. Real-world performance gains from partition pruning and correct indexing strategies
4. How to safely remove partitions using the detach-and-drop method with zero downtime
5. Operational templates and partition lifecycle management best practices
Preferred Session Duration:
Ideal length: 45 minutes
Alternate formats also supported:
25-minute short talk (condensed version)
60–90-minute deep-dive workshop (expanded version with demos)
Prerequisite Knowledge (Suggested but Not Required):
1. Basic familiarity with PostgreSQL tables, indexes, and query plans
2. Understanding of transactional data patterns and retention requirements
3. Optional: prior experience with Oracle partitioning (helpful for comparison section)
Session Type & Style:
1. Technical, highly practical, example-driven presentation
2. No vendor pitching or product tie-ins
3. Real SQL examples, real performance numbers, and real operational challenges solved
Related Conferences Where This Topic Fits:
1. PGConf US
2. Postgres Build
3. POSETTE (formerly Citus Con)
4. Data Saturday events
5. AWS Summit / Azure Data Community Meetups (for cloud migration relevance)
6. Enterprise Postgres user groups
(This proposal is uniquely tailored for PGDATA 2026 but aligns well with the above audiences.)
Recording & Sharing:
This session may be recorded and shared publicly. All examples are anonymized and do not contain confidential or proprietary data.