Sai Nikhil Donthi
LTIMindtree - IT Technical Lead
Houston, Texas, United States
I am a technical lead and cloud automation specialist with a strong focus on Microsoft Azure and DevOps-driven infrastructure. My work bridges research and practice in dynamic scaling, automation, and workload optimization, with particular expertise in Ansible-based orchestration for Platform-as-a-Service (PaaS) environments. I have published research on scaling strategies for refinery turnaround workloads, highlighting cold-start latency, cost–performance tradeoffs, and multi-tier elasticity. Passionate about practical cloud solutions, I am excited to share insights on how automation frameworks can improve reliability, efficiency, and cost-effectiveness in time-sensitive industrial operations.
Evaluating the Effectiveness of Delta Lake over Parquet
We have been witnessing rapid growth in data-intensive applications adopting efficient columnar storage formats, with Parquet becoming a widely used standard in modern data pipelines. Parquet has proven more efficient than traditional databases in columnar storage, schema evolution, compression, and breadth of supported tools. However, Parquet does not capture transactional logs and lacks ACID properties and atomic writes, which can result in data corruption and makes metadata operations expensive. Delta Lake, an open-source storage layer built on Parquet, addresses these limitations by introducing ACID transactions, schema evolution, time travel, and unified batch and streaming support. This study evaluates the effectiveness of Delta Lake over Apache Parquet, examining the key benefits Delta Lake provides in the big data world and the optimization techniques used in Microsoft Fabric as the baseline, with Delta Lake as the backend storage layer. Although Parquet can compress and store diverse data formats such as JSON, XML, CSV, and audio, it lacks the schema enforcement, reliability, and transactional guarantees that Delta Lake offers, all of which are crucial for modern data-critical applications. In short, Delta Lake extends Parquet with delta logs that record incremental transactions, eliminating Parquet's limitations by adding ACID transactions, schema evolution, time travel, and unified batch and streaming support. This research finds that Parquet remains well suited to read-heavy, batch-oriented workloads that do not require parallel write operations, while Delta Lake provides remarkable advantages for heavy workloads that require data versioning, reliability, and parallel execution.
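To make the delta log idea concrete, here is a minimal, self-contained Python sketch of a Delta-style commit log. It is illustrative only, not the actual Delta Lake implementation: the `commit`/`snapshot` functions and the file layout are simplified assumptions, but they show how append-only, versioned commit files give atomic writes and time travel over otherwise immutable Parquet files.

```python
import json
import os
import tempfile

def commit(log_dir, actions):
    """Atomically append one commit to a Delta-style transaction log.

    Each commit is a JSON file named by a monotonically increasing
    version number, mirroring how Delta Lake's _delta_log records
    which Parquet data files are currently live.
    """
    existing = [f for f in os.listdir(log_dir) if f.endswith(".json")]
    version = len(existing)  # next version number
    path = os.path.join(log_dir, f"{version:020d}.json")
    # Write to a temp file, then rename: rename is atomic on POSIX,
    # so readers never observe a half-written commit.
    fd, tmp = tempfile.mkstemp(dir=log_dir)
    with os.fdopen(fd, "w") as f:
        for action in actions:
            f.write(json.dumps(action) + "\n")
    os.rename(tmp, path)
    return version

def snapshot(log_dir, as_of=None):
    """Replay the log to find the live data files; passing as_of
    'time travels' by ignoring commits made after that version."""
    live = set()
    for name in sorted(os.listdir(log_dir)):
        if not name.endswith(".json"):
            continue
        version = int(name.split(".")[0])
        if as_of is not None and version > as_of:
            break
        with open(os.path.join(log_dir, name)) as f:
            for line in f:
                action = json.loads(line)
                if "add" in action:
                    live.add(action["add"])
                elif "remove" in action:
                    live.discard(action["remove"])
    return live
```

Because readers only trust files referenced by committed log entries, a failed writer leaves no visible partial state, which is exactly the gap the abstract identifies in plain Parquet directories.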
Dynamic Cloud Scaling for Refinery Turnarounds: Ansible-Powered Automation in Azure
Oil and gas refineries face intense workload surges during scheduled maintenance windows, especially when deploying custom-built turnaround applications in Azure. Static scaling strategies often fail to meet these demands, leading to latency, performance degradation, and rising operational costs.
In this session, we’ll explore a novel Ansible-based automation framework designed to dynamically scale Azure App Services in real time—tailored specifically for refinery turnaround workloads. You’ll learn how infrastructure-as-code and agentless automation can be combined with continuous monitoring to create adaptive, latency-aware scaling workflows. We’ll dive into the challenges of cold starts, frequent up/down scaling, and Ansible playbook execution delays—and how this framework overcomes them.
Attendees will walk away with:
A practical blueprint for dynamic resource provisioning in Azure using Ansible
Insights into reducing resource waste and improving application performance
Strategies for minimizing human error through automated configuration management
Lessons applicable across industries with similar workload variability
Whether you're in industrial operations, cloud architecture, or DevOps, this session offers actionable techniques to optimize performance and cost-efficiency in complex environments.
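The latency-aware, flap-resistant decision logic described above can be sketched in a few lines. This is a hypothetical illustration, not the session's actual framework: the thresholds, cooldown window, and the `ScalingPolicy` name are assumed values for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    """Threshold-based scaling decision with a cooldown.

    Illustrative values only: a real deployment would tune these
    against measured p95 latency and Azure App Service plan limits.
    """
    scale_up_ms: float = 800.0    # p95 latency that triggers scale-out
    scale_down_ms: float = 200.0  # p95 latency that permits scale-in
    cooldown_s: int = 300         # minimum seconds between actions
    min_instances: int = 1
    max_instances: int = 10

    def decide(self, p95_latency_ms, instances, seconds_since_last_action):
        # Respect the cooldown: frequent up/down scaling both wastes
        # money and re-triggers cold starts on freshly added instances.
        if seconds_since_last_action < self.cooldown_s:
            return instances
        if p95_latency_ms > self.scale_up_ms and instances < self.max_instances:
            return instances + 1
        if p95_latency_ms < self.scale_down_ms and instances > self.min_instances:
            return instances - 1
        return instances
```

In the framework the session describes, a decision like this would be fed by continuous monitoring and executed by an agentless Ansible playbook against the Azure App Service plan.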
Custom Connectors in Power Apps
Custom connectors in Power Apps unlock endless possibilities by connecting your apps to any external API or service. They bridge the gap where standard connectors end, enabling you to bring in unique business systems and third-party services seamlessly. Build once, reuse everywhere.
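Power Apps custom connectors are defined from an OpenAPI 2.0 (Swagger) description of the target API. The minimal definition below, written as a Python dict for readability, shows the shape of such a description; the host, path, and `GetItem` operation are hypothetical placeholders for whatever internal API you want to expose.

```python
# A minimal OpenAPI 2.0 (Swagger) definition of the kind a Power Apps
# custom connector imports. Host, basePath, and the operation are
# illustrative assumptions, not a real service.
connector_definition = {
    "swagger": "2.0",
    "info": {"title": "Inventory API", "version": "1.0"},
    "host": "api.example.com",   # hypothetical backend host
    "basePath": "/v1",
    "schemes": ["https"],
    "paths": {
        "/items/{id}": {
            "get": {
                # operationId becomes the action name makers see in Power Apps
                "operationId": "GetItem",
                "summary": "Get one inventory item",
                "parameters": [
                    {"name": "id", "in": "path", "required": True, "type": "string"}
                ],
                "responses": {"200": {"description": "The item"}},
            }
        }
    },
}
```

Once imported, the connector (and each `operationId` within it) is reusable across Power Apps, Power Automate, and Logic Apps, which is what "build once, reuse everywhere" refers to.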
Evaluating Ansible-Based End-to-End Dynamic Scaling: Analysing Cold-Start Latency, Cost–Performance, and Multi-Tier Elasticity
Oil and gas refineries plan maintenance schedules well in advance, and all business-critical applications are scheduled for use during this maintenance window to record activity status and track items to closure. As refinery turnaround workloads grow on demand, they require scalable cloud solutions, particularly for custom-built, business-critical turnaround applications in Azure environments, along with robust and adaptable resource-management strategies. Consumption spikes tied to refinery maintenance activities during turnaround periods are frequent and severe, and can cause significant performance challenges, operational delays, and increased costs. This study addresses the common problems of static scaling techniques, which lead to resource inefficiencies and decreased system responsiveness during these peak times: cold-start latency of Azure services, frequent upscaling and downscaling, and latency delays while executing Ansible playbooks. Despite the wide variety of automation tools available, there is still a research gap in using Ansible for dynamic, real-time scaling of Azure services suited to refinery turnaround workloads. Using a mixed-methods approach, this study develops and evaluates an Ansible-based automation framework that dynamically adjusts Azure App Service instances in response to real-time workload data. The approach combines infrastructure-as-code practices with continuous monitoring to enable independent, agentless, and adaptive automation workflows. Research findings indicate that the proposed framework greatly improves operational efficiency by decreasing resource waste, enhancing application performance, and lowering related expenses during maintenance windows. Furthermore, automated and consistent configuration management minimizes human error and intervention, improving system reliability.
These findings highlight the value of optimizing cloud resources in refinery operations and suggest broader applicability across sectors with similar workload fluctuations. By integrating cloud-based scaling and automation frameworks in industrial settings, the study adds to the body of knowledge an adaptable framework for dynamic resource provisioning. In complex operational environments, combining Ansible integration with analysis of Azure service latency metrics, and enabling end-to-end scaling of all services involved, appears to be an effective way to meet cost-effectiveness and performance requirements.
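The cost side of the cost–performance tradeoff can be illustrated with simple arithmetic: static scaling pays for peak capacity across the whole maintenance window, while dynamic scaling pays only for the capacity the demand curve requires. All numbers below (the $0.20/instance-hour rate, the 12-hour window, and the demand curve) are hypothetical, not figures from the study.

```python
def turnaround_cost(instances_by_hour, rate_per_instance_hour):
    """Total compute cost over a maintenance window, given the number
    of instances running in each hour. Rates are illustrative."""
    return sum(n * rate_per_instance_hour for n in instances_by_hour)

# Hypothetical 12-hour turnaround window at $0.20 per instance-hour.
RATE = 0.20
# Static scaling: provision for the peak (6 instances) the whole time.
static_plan = [6] * 12
# Dynamic scaling: follow the demand curve instead.
dynamic_plan = [2, 2, 3, 4, 6, 6, 6, 5, 4, 3, 2, 2]

static_cost = turnaround_cost(static_plan, RATE)    # 72 instance-hours
dynamic_cost = turnaround_cost(dynamic_plan, RATE)  # 45 instance-hours
savings = 1 - dynamic_cost / static_cost            # fraction saved
```

Under these assumed numbers the dynamic plan uses 45 instance-hours instead of 72, a 37.5% saving, before accounting for the cold-start latency that a too-aggressive scale-in would reintroduce.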