Speaker

Naresh Reddy Regalla

Cloud Solution Architect and Business Process Management Workflow Strategist

Chicago, Illinois, United States


Naresh Reddy Regalla is an accomplished IT leader and engineer with over 20 years of experience delivering enterprise technology solutions across banking, retail, and healthcare sectors. With deep expertise in business process automation and cloud-native solutions, he has established a strong reputation for driving innovation, operational efficiency, and compliance while enabling organizations to scale securely.
Naresh specializes in IBM Business Automation Workflow (BAW), IBM BPM, and Alfresco Activiti, with a proven track record in workflow design, process optimization, and legacy modernization. His cloud architecture expertise spans OpenShift, Kubernetes, AWS, and Pivotal Cloud Foundry, where he has delivered highly resilient, containerized enterprise applications. He is equally skilled as a full-stack engineer, proficient in Java, Spring Boot, REST APIs, and modern frontend frameworks such as React and Angular.
Currently serving as an Expert Application Engineer at Discover Financial Services, Naresh plays a pivotal role in payments fraud detection and risk prevention initiatives, safeguarding the global payments ecosystem. His leadership has contributed to critical solutions, including a secure One-Time Password API, transaction monitoring platforms, and advanced fraud prevention algorithms.
Naresh is a certified AWS Solutions Architect and IBM BPM expert, with a B.Tech in Computer Science & Engineering from Jawaharlal Nehru Technological University. Known for his collaborative leadership style, he has mentored high-performing teams and partnered with business stakeholders to deliver scalable and secure systems.
Passionate about digital transformation, Naresh continues to focus on bridging business and technology, ensuring enterprises remain future-ready in a rapidly evolving digital landscape.

Badges

Area of Expertise

  • Information & Communications Technology
  • Transports & Logistics
  • Travel & Tourism

Topics

  • PostgreSQL
  • AWS Architecture
  • Scalable System Design
  • REST API
  • AWS S3
  • AWS Aurora
  • AWS NLB
  • AWS Serverless
  • IBM BPM
  • BPM
  • Camunda

Partitioning for Performance: Automating the Detach and Cleanup Cycle

Migrating from Oracle to PostgreSQL often reveals unexpected architectural differences, particularly around global indexes and constraint enforcement. In a system processing 20–25 million transactions per day, these differences can quickly lead to index bloat and I/O spikes if you rely on a traditional monolithic table.

This session walks through a move to a range-partitioned architecture designed for high-volume data aging. We will explore how range partitioning improved performance by 60% and why replacing heavy DELETE operations with a detach-and-drop lifecycle is essential for pruning data without locking overhead.
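The detach-and-drop lifecycle described above can be sketched in a few statements (table, column, and partition names are illustrative, and DETACH ... CONCURRENTLY assumes PostgreSQL 14 or later):

```sql
-- Parent table, range-partitioned by transaction timestamp.
CREATE TABLE transactions (
    txn_id  bigint       NOT NULL,
    txn_ts  timestamptz  NOT NULL,
    amount  numeric(12,2),
    PRIMARY KEY (txn_id, txn_ts)   -- the partition key must be part of the PK
) PARTITION BY RANGE (txn_ts);

-- One partition per day keeps each aging step small and predictable.
CREATE TABLE transactions_2026_01_01 PARTITION OF transactions
    FOR VALUES FROM ('2026-01-01') TO ('2026-01-02');

-- Aging out a day: detach (CONCURRENTLY avoids blocking readers), then drop.
-- This replaces a multi-million-row DELETE with two metadata operations.
ALTER TABLE transactions DETACH PARTITION transactions_2026_01_01 CONCURRENTLY;
DROP TABLE transactions_2026_01_01;
```

Note that DETACH PARTITION ... CONCURRENTLY cannot run inside a transaction block, which matters when scripting the cleanup cycle.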

We will also dive into the critical "aftercare" of these operations: using VACUUM and ANALYZE to manage system catalogs and ensure the planner stays accurate after large-scale data removal. By the end of the talk, you’ll see the real-world results of this strategy—including a 70% reduction in buffer cache pressure—and walk away with a practical blueprint for keeping high-growth databases manageable.
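The "aftercare" step can be as simple as refreshing statistics after each detach/drop cycle; a sketch (vacuuming catalog tables directly typically requires superuser privileges, and autovacuum often covers them anyway):

```sql
-- Refresh parent-table statistics so the planner sees the new partition set.
ANALYZE transactions;

-- Frequent CREATE/DETACH/DROP churn leaves dead rows in the system
-- catalogs; forcing a vacuum keeps catalog lookups fast.
VACUUM (ANALYZE) pg_catalog.pg_class;
VACUUM (ANALYZE) pg_catalog.pg_attribute;
```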

Technical Requirements:
1. Standard conference room setup with projector/HDMI connection.
2. Ability to display SQL code, architecture diagrams, and EXPLAIN ANALYZE output.
3. No additional hardware required.
4. Optional: Wi-Fi access for demonstrating query plans (not mandatory).

First Public Delivery:
This session has not been delivered publicly before. It is based on real production implementations and internal engineering work. PGDATA 2026 will be the first public presentation of this material.

Target Audience:
1. PostgreSQL DBAs and database engineers
2. Data platform architects
3. Application engineers working with large transactional workloads
4. Teams migrating from Oracle to PostgreSQL
5. Anyone responsible for system performance, data lifecycle management, and partition maintenance in high-volume environments

Session Takeaways:
1. How to design efficient range-based partitions for time-series or fast-growing datasets
2. How to compare Oracle and PostgreSQL partitioning behaviors when planning migrations
3. Real-world performance gains from partition pruning and correct indexing strategies
4. How to safely remove partitions using the detach-and-drop method with zero downtime
5. Operational templates and partition lifecycle management best practices

Preferred Session Duration:
Ideal length: 45 minutes

Alternate formats also supported:
25-minute short talk (condensed version)
60–90-minute deep-dive workshop (expanded version with demos)

Prerequisite Knowledge (Suggested but Not Required):
1. Basic familiarity with PostgreSQL tables, indexes, and query plans
2. Understanding of transactional data patterns and retention requirements
3. Optional: prior experience with Oracle partitioning (helpful for comparison section)

Session Type & Style:
1. Technical, highly practical, example-driven presentation
2. No vendor pitching or product tie-ins
3. Real SQL examples, real performance numbers, and real operational challenges solved

Related Conferences Where This Topic Fits:
1. PGConf US
2. Postgres Build
3. POSETTE (formerly Citus Con)
4. Data Saturday events
5. AWS Summit / Azure Data Community Meetups (for cloud migration relevance)
6. Enterprise Postgres user groups
(This proposal is uniquely tailored for PGDATA 2026 but aligns well with the above audiences.)

Recording & Sharing:
This session may be recorded and shared publicly. All examples are anonymized and do not contain confidential or proprietary data.

Index Skip Scans in Postgres 18: Optimizing Composite Index Performance

The leftmost-prefix rule has long constrained B-Tree index design, often forcing teams to maintain redundant indexes to cover different query patterns. Postgres 18 changes this with the introduction of Index Skip Scans, which allow the planner to use a multi-column index even when the leading column is missing from the WHERE clause.

In this session, we will move past the feature announcement to explore the operational reality of Skip Scans. We will analyze the internal "jump" logic of the B-Tree traversal and use EXPLAIN (ANALYZE, BUFFERS) to compare execution costs between Postgres 17 and 18. I will share specific benchmarks on how column cardinality dictates the success of a skip scan and provide a framework for consolidating existing indexes to reduce storage overhead in high-volume environments.
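As a small illustration of the kind of demo described above (table and index names are invented for this sketch):

```sql
-- Composite index whose leading column (status) has low cardinality.
CREATE TABLE orders (
    status     text    NOT NULL,   -- e.g. 'new', 'shipped', 'closed'
    order_id   bigint  NOT NULL,
    created_at timestamptz DEFAULT now()
);
CREATE INDEX idx_orders_status_order ON orders (status, order_id);

-- Before Postgres 18, this predicate could not use the index because the
-- leading column is absent from the WHERE clause. In 18, the planner may
-- choose an Index Skip Scan, probing each distinct status value in turn.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE order_id = 1234567;
```

The fewer distinct values the leading column holds, the fewer "jumps" the scan needs, which is exactly the cardinality trade-off the session benchmarks.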

Key Takeaways:

  • Internal Mechanics: How the Postgres 18 engine "skips" through index pages to find non-contiguous data.
  • The Cardinality Factor: Identifying the "sweet spot" for leading columns to ensure Skip Scans outperform traditional sequential scans.
  • Storage Optimization: Practical strategies for removing redundant indexes by leveraging multi-purpose composite indexes.
  • Planner Costing: How to interpret EXPLAIN output when Skip Scans are active and how the planner calculates the cost-benefit.
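The storage-optimization idea above can be sketched as follows (index and table names are hypothetical; always confirm an index is unused before dropping it):

```sql
-- Before skip scans, both indexes were often kept to cover both patterns:
--   CREATE INDEX idx_orders_status_order ON orders (status, order_id);
--   CREATE INDEX idx_orders_order        ON orders (order_id);

-- With PG 18 skip scans, the single-column index may be redundant.
-- Check usage counters before removing it:
SELECT indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE relname = 'orders';

DROP INDEX IF EXISTS idx_orders_order;
```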

Session Metadata & Technical Requirements
Target Audience:

  • Primary: Database Administrators (DBAs), Backend Developers, and Data Architects.
  • Secondary: DevOps Engineers and Site Reliability Engineers (SREs) focused on performance tuning.

Experience Level: Intermediate. (Assumes basic knowledge of B-Tree structures and SQL performance tuning, but explains the new PG 18 mechanics from the ground up).

Preferred Session Duration: 45 Minutes (35-minute presentation + 10-minute Q&A).

First Public Delivery: Yes.

Session Track: PostgreSQL Internals, Performance Optimization, or Database Administration.

Technical Requirements & Logistics
Live Demo Environment: I will be demonstrating the Skip Scan behavior using a live PostgreSQL 18 instance (via Docker). I require a stable HDMI connection and a standard power outlet.

Key Learning Objectives
  • Deconstruct the internal B-Tree traversal logic that historically limited multi-column index usage.
  • Evaluate the specific conditions (low cardinality vs. high cardinality) where the PostgreSQL 18 Skip Scan provides the highest performance gains.
  • Implement a revised indexing strategy that reduces "index bloat" by leveraging more flexible composite indexes.
  • Analyze execution plans (EXPLAIN ANALYZE) to identify when the planner is using a Skip Scan versus a Sequential Scan.
