Riyas P
Staff Software Engineer, Harness
Bengaluru, India
Riyas is a Staff Software Engineer at Harness with nearly a decade of experience building and operating backend and infrastructure systems. He works on large-scale Kubernetes platforms with a focus on autoscaling, cost optimization, and reliability, where scaling decisions directly affect infrastructure cost, workload stability, and developer trust. His experience spans node autoscaling, workload placement, and coordinating cost-driven optimizations in production environments. Riyas is particularly interested in systems-level thinking and in designing infrastructure platforms that are reliable and usable at scale.
Demystifying Kubernetes Operators: Building a Dynamic Scheduler
Join us for an informative session as we unravel Kubernetes operators by building a practical dynamic scheduler: a cron scheduler operator that automates the scaling of Kubernetes workloads. Our discussion will cover the essential topics: the Kubernetes Operator Framework, Custom Resources, reconcilers, the request lifecycle, finalizers, webhooks, and events.
Learn how to define schedules for Kubernetes workloads using custom resources and visualize them on an external dashboard. See how the dashboard and the Kubernetes operator stay synchronized, enabling precise control and management of the schedules.
This talk is designed for Kubernetes enthusiasts and DevOps professionals seeking a deeper understanding of Kubernetes operators. You'll leave with actionable insights to build and enhance Kubernetes operators to help with your automation.
Autoscaling Is Not Optimization: Design Patterns for Budget-Aware Control Loops in Kubernetes
Kubernetes autoscaling mechanisms optimize for resource utilization and not for economics. Horizontal Pod Autoscalers react to metrics, and node provisioners such as Karpenter react to scheduling pressure. But neither system has any concept of financial constraints, budget ceilings, or business priorities. In real-world production environments, this creates an architectural gap: infrastructure scales correctly, yet cost overruns still occur.
This talk explores the design patterns behind introducing a budget-aware control layer on top of Kubernetes. We will examine why cost signals are invisible to the control plane, how external financial state can be modeled as a control loop, and what tradeoffs emerge when economic governance interacts with scheduling decisions.
Topics:
- The architectural limitations of metric-driven autoscaling
- Designing external controllers that enforce budget policies
- Stability vs. savings tradeoffs
- Priority-aware workload reduction strategies
Attendees will leave with practical architectural patterns and a deeper understanding of where financial governance fits, and does not fit, in cloud-native systems.
Kubernetes Autoscaling Is Not a Solved Problem (Yet)
Kubernetes autoscaling is often treated as a solved problem. Enable HPA, add a node autoscaler such as Cluster Autoscaler or Karpenter, and expect the system to balance performance, reliability, and cost automatically. In real production environments, this assumption often breaks down.
This talk explores why autoscaling fails at scale, not because individual tools are broken, but because scaling decisions are made in isolation. Node autoscaling, workload autoscaling, bin packing, and cost optimization may work well independently, but their interactions introduce failure modes when combined in real clusters.
Based on production experience operating large Kubernetes environments, the session highlights pitfalls such as misleading utilization signals, disruption during scale-down events, and cost optimizations that negatively impact reliability. The talk reframes autoscaling as an orchestration problem and offers a practical mental model for evaluating trade-offs in production systems.
Getting Started with Karpenter: What to Expect in Real Clusters
Karpenter makes it easier for Kubernetes clusters to scale nodes based on workload demand, but many teams are unsure what to expect once they enable it in real environments.
This talk explains how Karpenter fits into Kubernetes autoscaling at a high level and what problems it is designed to solve. It walks through common expectations, typical behaviors teams observe after adoption, and early pitfalls such as unexpected scaling, cost surprises, and disruption during scale-down events.
Designed for engineers new to Karpenter or Kubernetes autoscaling, the session focuses on building intuition rather than configuration details, helping attendees understand when Karpenter helps, when it does not, and what questions to ask before using it in production.