Snigdha Kashyap
Contributing to #Tech as SDE-2 @ExpediaGroup
Gurugram, India
A Software Development Engineer with over three years of experience specialising in software development (primarily backend) and cloud technology. In previous roles at Freecharge Payments and Airtel Payments Bank, I contributed to robust, scalable systems across various tech stacks. I have a zeal for building efficient, user-centric applications, and I'm a problem-solver and avid technologist.
The Hidden Cost of AI on Kubernetes (And How to Fix It Before Finance Calls You)
Everyone is deploying AI.
Almost no one is measuring what it truly costs.
AI workloads behave fundamentally differently from traditional microservices — they are GPU-hungry, memory-intensive, bursty, and often poorly autoscaled. On Kubernetes, that translates into idle GPUs, oversized nodes, inefficient bin-packing, runaway inference scaling, and hidden infrastructure waste.
And the scary part?
Most teams don’t even see it.
In this deep-dive session, we’ll explore what actually happens when AI workloads hit production Kubernetes clusters — especially in cloud environments like Azure.
We’ll break down:
• Why traditional autoscaling strategies fail for inference workloads
• GPU scheduling, bin-packing, and resource fragmentation problems
• Cost traps in model serving architectures
• Observability patterns to detect waste before it becomes a budget crisis
• Designing cost-aware AI platforms on Kubernetes
• Practical architecture patterns that balance performance, scalability, and cost
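To make the waste concrete, here is a minimal, purely illustrative sketch of the kind of back-of-the-envelope accounting the session advocates: estimating the dollar cost of idle GPU capacity from utilization samples. All numbers (hourly price, utilization values) are invented for illustration, not taken from any real cluster.

```python
# Hypothetical sketch: estimating idle-GPU spend from utilization samples.
# Prices and utilization figures below are made up for illustration.

def idle_gpu_cost(utilization_samples, hourly_price, hours_per_sample):
    """Cost attributable to the idle fraction of each utilization sample.

    utilization_samples: GPU utilization per interval, each in [0.0, 1.0].
    hourly_price: on-demand price of the GPU node per hour.
    hours_per_sample: length of each sampling interval, in hours.
    """
    idle_fraction_total = sum(1.0 - u for u in utilization_samples)
    return idle_fraction_total * hourly_price * hours_per_sample

# Example: a GPU node at $3.00/hour sampled hourly over one day,
# busy only during an eight-hour burst of daytime inference traffic.
samples = [0.05] * 16 + [0.80] * 8   # 16 mostly-idle hours, 8 busy ones
waste = idle_gpu_cost(samples, hourly_price=3.00, hours_per_sample=1.0)
print(f"Estimated daily waste: ${waste:.2f}")
```

Even this toy model shows how a single under-utilized GPU node can quietly burn tens of dollars a day; multiplied across a fleet, that is the budget crisis the talk warns about.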
This session is not about hype. It’s about infrastructure reality.
If you are a cloud engineer, architect, or platform builder deploying AI systems — this talk will give you the mental models and architectural patterns to run AI workloads responsibly at scale.
Because scaling AI without cost visibility is not innovation.
It’s liability.
Responsible AI: Building the Right Intelligence, Rightly
As AI rapidly integrates into every layer of technology, the real challenge is no longer what AI can do, but how responsibly it does it.
This session explores the core principles of Responsible AI (fairness, transparency, accountability, and safety) and how developers can operationalize them through real-world practices like AI governance, bias detection, risk assessment, and ethical system design.
From understanding generative AI’s dual nature to embedding ethics into every model lifecycle, this talk will help participants learn how to build AI that’s not just intelligent, but also trustworthy, compliant, and human-centric.
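As one concrete example of the bias-detection practices mentioned above, here is a minimal sketch of a demographic parity check: comparing positive-outcome rates between two groups of model decisions. The data is synthetic and the metric is just one of many fairness measures, not the talk's prescribed method.

```python
# Illustrative bias-detection check: demographic parity difference.
# The decision data below is synthetic, for demonstration only.

def demographic_parity_diff(outcomes_a, outcomes_b):
    """Absolute difference in positive-outcome rates between two groups.

    outcomes_a, outcomes_b: sequences of 0/1 model decisions
    (1 = positive outcome, e.g. loan approved).
    """
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return abs(rate_a - rate_b)

# Synthetic loan-approval decisions for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved
gap = demographic_parity_diff(group_a, group_b)
print(f"Parity gap: {gap:.3f}")   # a large gap flags the model for review
```

In practice a check like this would run over held-out evaluation data as part of a governance pipeline, with a threshold that triggers human review.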
Engineering at Scale: Lessons from Building High-Throughput Systems on an Open-Source Stack
Modern applications demand real-time processing, massive scalability, and zero-downtime reliability. But building systems that can handle millions of events without breaking is harder than it looks.
In this talk, I’ll share real-world engineering lessons from building and operating a cloud-native, event-driven platform using Java, Spring Boot, Apache Kafka, Docker, and Kubernetes.
We’ll walk through how to design scalable microservices architectures, build reliable event pipelines, run production workloads on Kubernetes, and handle failures, traffic spikes, and performance bottlenecks. I’ll also cover observability, autoscaling, and cost optimization using open-source tooling.
This session is focused on practical design decisions, proven architectural patterns, and production learnings that engineers can directly apply to build resilient, high-performance distributed systems.
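One of the pipeline-design decisions the session covers can be sketched in a few lines: keying events so that everything for one entity lands on the same partition, which is what preserves per-entity ordering. This dependency-free sketch uses MD5 for the key hash; note that Kafka's default partitioner actually uses murmur2, so this only illustrates the principle, not Kafka's exact behavior.

```python
# Hedged sketch (no real Kafka dependency): per-key partitioning
# preserves per-entity ordering in an event pipeline.
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Deterministically map an event key to a partition, so all events
    for one entity (e.g. one account) land on the same partition and
    are consumed in order. Uses MD5 for simplicity; Kafka's default
    partitioner uses murmur2 instead."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Every event keyed by "account-42" maps to the same partition.
p = partition_for("account-42", num_partitions=12)
assert all(partition_for("account-42", 12) == p for _ in range(100))
print(f"All 'account-42' events route to partition {p}")
```

The flip side, also discussed in the talk, is that a hot key concentrates load on one partition, so key choice is a throughput decision as much as an ordering one.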
Invisible AI: The AI of Modern Living, a Silent Revolution
Note: the slides are still in draft form and subject to further improvements and additions; please do not consider them final. Feel free to reach out for more information.
This talk uncovers how Artificial Intelligence is already embedded in our daily lives, often in ways we don’t even realize. From fraud detection in banking to Netflix recommendations, Uber surge pricing, spam filters, and real-time traffic predictions, AI is silently powering the systems we rely on every day. The session makes these invisible integrations relatable and easy to understand through real-world examples and interactive discussions.
Key Takeaways
• Understand how AI powers everyday experiences we often take for granted.
• Discover real-world applications of AI in finance, entertainment, e-commerce, and travel.
• Learn how Cloud technologies enable these AI-driven services at scale.
• Gain a new perspective on how AI shapes convenience, security, and decision-making behind the scenes.
• Leave with curiosity to spot invisible AI in your own daily routines.
Who Should Attend
• Tech professionals curious about practical AI applications.
• Students and early-career engineers who want to connect theory with real-world impact.
• Non-technical participants looking to understand AI’s everyday role without heavy jargon.
• Community members and enthusiasts who want to explore how AI quietly influences modern living.
Code That Thinks: Designing AI-Native Software Systems for the Next Decade
Artificial intelligence is no longer just a feature bolted onto existing products; it is rapidly becoming the foundation upon which modern software is built. This shift demands a fundamental rethink of how we design, develop, deploy, and scale applications.
In this session, we'll explore what it truly means to build AI-native systems: applications where intelligence is woven into the architecture itself, not merely layered on top. From orchestrating LLM-powered workflows and designing resilient human-in-the-loop systems to managing observability, cost, governance, and trust, we'll examine the architectural patterns that define the next generation of software.
Through real-world examples, practical design principles, and a live demonstration, attendees will learn how to move beyond experimental AI integrations and engineer production-ready intelligent systems that are scalable, reliable, and responsible.
Whether you're building internal developer tools, customer-facing products, or enterprise platforms, this session will equip you with the frameworks needed to create software that doesn't just execute instructions, but reasons, adapts, and evolves.
From Chaos to Control: AI-Optimized Kubernetes Workflow Management
As cloud-native systems scale, managing Kubernetes clusters has become increasingly complex. This session introduces how AI-driven tools are transforming Kubernetes workflows, from automating CI/CD pipelines to optimizing resource allocation and anomaly detection in production. We will explore cutting-edge open-source AI projects that integrate seamlessly with Kubernetes, demonstrating real-world applications that reduce human intervention while improving efficiency and reliability.
Attendees will leave with actionable insights and practical knowledge to adopt AI-powered automation in their Kubernetes environments effectively.
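To give a flavor of the anomaly detection discussed above, here is a minimal sketch of the kind of check an AI-driven operator might run over pod metrics: flagging samples that sit more than k standard deviations from the mean. The latency figures are synthetic, and real tools use far more sophisticated models; this only illustrates the idea.

```python
# Minimal sketch of statistical anomaly detection over pod metrics.
# The latency samples below are synthetic, for illustration only.
import statistics

def anomalies(samples, k=3.0):
    """Return samples more than k standard deviations from the mean."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [x for x in samples if abs(x - mean) > k * stdev]

# Synthetic p99 latency samples (ms) with one obvious spike.
latencies = [102, 98, 105, 99, 101, 103, 97, 100, 480, 102]
print(anomalies(latencies, k=2.0))   # flags the 480 ms spike
```

A production system would feed flags like this into an alerting or auto-remediation loop rather than printing them, which is exactly the "reduced human intervention" the session demonstrates.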
Innovating with Hybrid Agile: Merging Scrum and Kanban to Build Smarter Products
Building innovative products requires more than just sticking to standard Agile frameworks. Hybrid Agile blends the discipline of Scrum with the adaptability of Kanban, empowering product and tech teams to innovate faster and smarter.
This session will delve into:
• How combining Scrum and Kanban fosters a culture of continuous innovation.
• Practical workflows and tools to align product vision with engineering delivery.
• Real-world examples of successful product launches using Hybrid Agile.
• Strategies for overcoming bottlenecks in product discovery and development cycles.
Attendees will learn how to adapt this approach for faster MVPs, seamless stakeholder collaboration, and a clear innovation roadmap.
Ethical decision making with Responsible AI
Introduction to responsible AI: Responsibility in the context of AI refers to ensuring that artificial intelligence systems are designed, developed, and deployed in an ethical, transparent, and accountable manner. This includes mitigating risks, protecting privacy, ensuring fairness, and making sure that AI does not harm individuals or society. Responsibility is not just about the behavior of the AI itself but also about how it is used by the people who create, regulate, and interact with it.
We'll also look at how different organisations define it in their own ways.
In the context of AI, "Parampara" can be understood as the tradition of responsible creation and ethical use of technology. "Pratishtha" can be understood as the foundational establishment of AI systems that are built upon clear ethical guidelines and societal values. "Anushasan" refers to the discipline or regulation needed when developing and using AI systems.
Responsibilities of AI (building AI right) and the components of the right AI.
Our responsibilities towards AI as developers, creators, and users.
DOs and DON’Ts
Interesting references and a case study from Accenture: https://www.accenture.com/us-en/case-studies/data-ai/blueprint-responsible-ai
Reaching Cloud 9 with Google Cloud’s GenAI tools
Cloud Nine isn’t just a phrase: it’s how developers feel with the new GenAI tools from Google Cloud. We’ll explore how Google Cloud’s cutting-edge GenAI tools are revolutionizing development, business operations, and end-user experiences, all while staying efficient, scalable, and accessible to everyone.
Includes an introduction to the capabilities of tools like Gemini, Vertex AI, AutoML, and many more.
DevFest Mumbai 2025 Sessionize Event