Speaker

Puspanjali Sarma

Engineering Leader | Principal Architect | Published Author | Thought Leader | Mentor | Speaker | 40under40 Data Scientist | ML | AI | Data Engineering | Generative AI | Agents & Agentic AI

Hyderābād, India

I’ve spent 15+ years building data and AI systems—and what keeps me motivated is simple: turning ambitious ideas into dependable, production-ready capabilities that teams can trust.

Today, I’m a Senior Manager at ServiceNow, leading AI platform innovations across machine learning, generative AI, and data engineering. I enjoy the end-to-end work: aligning to business outcomes, designing the architecture, and partnering with teams to ship high-quality solutions. I’ve worked across Fortune 500 enterprises and global startups, and I’ve learned one pattern repeatedly: AI doesn’t scale because of a great model; it scales because the system around it is designed and operated well.

That belief led me to write my book, Strategic AI Leadership Through Data, a practical playbook on building strong data foundations, responsible AI governance, and the leadership habits that make AI durable rather than fragile.

I was recognized among India’s AIM 40 Under 40 Data Scientists (2025), and I’ve helped deliver scalable AI frameworks across BFSI, healthcare, retail, and manufacturing. Mentorship is also important to me: I support communities such as WEHUB and women-in-tech networks, and I write and speak about ethical AI and inclusive team building.

I’m a DEI advocate because inclusion isn’t a statement; it’s a daily leadership practice. I care about building teams where people feel safe to contribute, grow, and lead.

If you’d like to exchange ideas on scaling AI responsibly, building data platforms, or making GenAI work in real enterprise environments, reach out. I’m always happy to connect.

Area of Expertise

  • Business & Management
  • Information & Communications Technology
  • Region & Country

Topics

  • Machine Learning and Artificial Intelligence
  • The Future of Artificial Intelligence: Trends and Transformations
  • Startup Innovation & Creativity
  • Generative AI
  • Data Engineering
  • Democratized Artificial Intelligence
  • Women in Leadership
  • Technology Innovation

From RAG to Reliable Agents: Productionizing AI on Kubernetes

Agentic AI is moving beyond RAG into systems that plan, execute, and interact with tools across enterprise environments. However, productionizing these systems introduces challenges around reliability, observability, and governance.

This session presents a cloud-native architecture for building Agentic AI systems on Kubernetes. We will cover multi-agent orchestration, tool execution layers (APIs, SQL), memory management, and integration with data platforms. The talk also introduces evaluation-driven development practices such as shadow testing, confidence scoring, and regression validation for LLM outputs.
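
To make the shadow-testing idea concrete, here is a minimal sketch assuming two interchangeable agent callables. All names in it (production_agent, candidate_agent, handle, shadow_log) are illustrative, not from the session: the candidate agent runs alongside production on live traffic, its answers are logged for offline comparison, and only the production answer reaches the user.

```python
# Minimal shadow-testing sketch. The two agents are stand-ins; in a real
# system they would be full RAG/agent pipelines.

def production_agent(query: str) -> str:
    """Stand-in for the agent currently serving users."""
    return f"prod answer to: {query}"

def candidate_agent(query: str) -> str:
    """Stand-in for the new agent under evaluation."""
    return f"candidate answer to: {query}"

shadow_log: list[dict] = []  # compared offline later, e.g. by an eval harness

def handle(query: str) -> str:
    """Serve the production answer; mirror the request to the candidate."""
    prod = production_agent(query)
    try:
        shadow_log.append({"query": query, "prod": prod,
                           "candidate": candidate_agent(query)})
    except Exception:
        pass  # a shadow failure must never affect the live response
    return prod
```

The key design point is the try/except around the shadow path: the candidate can crash, time out, or misbehave without any user-visible impact.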

In addition, we will explore guardrails for safe execution and observability patterns using tracing, metrics, and feedback loops to monitor agent behavior.

Through real-world case studies in AI-driven automation and analytics systems, attendees will gain practical design patterns to build scalable, reliable, and production-ready AI systems using cloud-native principles.

From RAG to Reliable Agents: An Open Source Playbook for Evaluation, Guardrails, and LLMOps

Teams are moving from Retrieval-Augmented Generation (RAG) to agentic workflows that plan, call tools, and take actions. The hard part is no longer making a demo work; it is making behavior reliable, safe, and observable in production.

This session presents a practical, open-source “Day 2” playbook for building trustworthy agents. We cover three pillars:

Offline evaluation: automated eval harnesses using heuristic metrics (groundedness/faithfulness, relevancy) plus agent-specific checks like tool-call correctness and step success rate, with regression gates before release.
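
As a sketch of the release-gate idea, with hypothetical metric names and thresholds (in practice harnesses such as Ragas or DeepEval compute the scores), a regression gate can be a few lines:

```python
# Hypothetical baseline scores from the last released agent version.
BASELINE = {"groundedness": 0.86, "relevancy": 0.82, "tool_call_accuracy": 0.91}
TOLERANCE = 0.02  # allowed drop per metric before the gate fails

def regression_gate(candidate: dict, baseline: dict = BASELINE,
                    tolerance: float = TOLERANCE) -> tuple[bool, list]:
    """Fail the release gate if any metric regresses beyond the tolerance."""
    failures = [
        (name, baseline[name], candidate.get(name, 0.0))
        for name in baseline
        if candidate.get(name, 0.0) < baseline[name] - tolerance
    ]
    return (not failures, failures)

ok, failures = regression_gate({"groundedness": 0.87, "relevancy": 0.79,
                                "tool_call_accuracy": 0.92})
# relevancy fell by 0.03, beyond the 0.02 tolerance, so the gate fails
```

Wired into CI, a failing gate blocks the release until the regression is investigated.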

Runtime guardrails: interceptors to prevent prompt injection impact, sensitive data leakage, unsafe outputs, and unauthorized tool actions via allowlists, policy checks, and redaction.
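
A toy interceptor can show two of these controls together: a tool allowlist and output redaction. The tool names and the email-only redaction rule are illustrative; production guardrail libraries cover far more patterns and policies.

```python
import re

# Hypothetical policy: only these tools may be invoked by the agent.
ALLOWED_TOOLS = {"search_docs", "run_sql_readonly"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def check_tool_call(tool_name: str) -> None:
    """Block any tool the policy has not explicitly allowed."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not on the allowlist")

def redact(text: str) -> str:
    """Mask email addresses before the output leaves the agent."""
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)

check_tool_call("search_docs")              # passes silently
print(redact("Contact alice@example.com"))  # Contact [REDACTED_EMAIL]
```

The allowlist check runs before tool execution; redaction runs after generation, so both sides of the agent boundary are covered.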

LLMOps and observability: tracing and structured telemetry to debug multi-turn tool execution, localize failures (retrieval vs planning vs tool), and monitor drift, latency, and cost.
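
A hand-rolled span emitter illustrates the idea; in practice OpenTelemetry or Langfuse SDKs provide this, and the step names and span schema below are only assumptions for the sketch:

```python
import json
import time

def traced(step_name: str, fn, *args, **kwargs):
    """Run one agent step and emit a structured span describing it."""
    start = time.perf_counter()
    status = "error"
    try:
        result = fn(*args, **kwargs)
        status = "ok"
        return result
    finally:
        span = {"step": step_name, "status": status,
                "latency_ms": round((time.perf_counter() - start) * 1000, 2)}
        print(json.dumps(span))  # route to a trace collector in production

# Tagging each step ("retrieval", "planning", "tool") lets a failure be
# localized by reading the trace rather than re-running the agent.
docs = traced("retrieval", lambda q: ["doc-1", "doc-2"], "billing policy")
```

Aggregating these spans over time also gives the drift, latency, and cost views mentioned above.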

Attendees leave with a reference architecture, metric checklist, and implementation patterns using open-source components (e.g., Ragas/DeepEval for evals, guardrail libraries, Langfuse/OpenTelemetry-style tracing).
