Speaker

Sandeep Uttamchandani

Making AI Real

Cupertino, California, United States

Sandeep Uttamchandani is the VP of Enterprise AI at Palo Alto Networks, bringing over 25 years of experience in developing and scaling AI and data products. His career spans from founding a startup to leading global teams at companies like IBM, VMware, and Intuit, consistently delivering measurable business impact. A recognized thought leader in the field, Sandeep is the best-selling author of "The Self-Service Data Roadmap," holds over 45 patents, and is a frequent speaker at industry conferences. He is passionate about democratizing AI and holds a Ph.D. in AI/Expert Systems from the University of Illinois Urbana-Champaign.

Area of Expertise

  • Business & Management
  • Information & Communications Technology

Topics

  • Artificial Intelligence
  • Data Science
  • Data Engineering
  • Software Engineering
  • Product

What is slowing down the productivity of Data Science & AI teams?

How effective is your Data Science & AI team at developing models and AI features that are actually deployed and generating business value? While there is a plethora of tools available to improve team productivity, there is only a nascent understanding of how other levers, namely the right process, team design, data maturity, and literacy, affect overall productivity.

The goal of this talk is to help leaders uncover blind spots and think holistically about team productivity beyond shiny new tools. The talk covers seven key patterns impacting productivity and how they can be addressed. The patterns are based on practical experience building high-performance Data Science and AI teams. Within the context of these patterns, I also cover how Generative AI can be leveraged to address some of them.

From 95% Failure to 95% Success: The Enterprise AI Playbook

A recent MIT study estimates that 95% of AI projects die in the “POC-to-deploy” gap. This failure isn't just a model problem; it's a systems, strategy, and engineering problem. After burning resources on "science projects," we developed a 5-point playbook of repeatable engineering patterns and practices that inverted our failure rate. This session dives into the five specific, battle-tested patterns we now enforce for every deployment.

This is the blueprint for engineers, architects, and technical leaders responsible for shipping real AI. We'll cover:
1. Prioritizing the "Jagged Edge": AI is not magic; it has a "jagged edge" of superhuman highs and baffling lows. We'll share our fail-fast framework for surgically scoping projects, prioritizing high-impact, low-complexity problems that fit AI's strengths.
2. Context Engineering Over Model Tuning: We'll show why context engineering trumps costly fine-tuning. It’s cheaper, faster to iterate, and fundamentally more debuggable.
3. Designing for Debuggability: We'll share the capabilities we built into our architecture for "compound AI systems": modular, agentic systems in which each component can be independently tested, versioned, and debugged, avoiding the monolithic "black box" nightmare.
4. Evals & Feedback Loops as a First Principle: A project without an eval framework is a project that's already failed. We'll share case studies and patterns for designing continuous evals and human-in-the-loop (HIL) feedback systems before writing a single line of model-specific code.
5. Secure AI with Modular Guardrails: How we architect for safety, security, and compliance. This covers our best practices for building scalable, independent guardrail services for input/output sanitization, content moderation, and security, treating safety as an engineering requirement, not an afterthought (a minimal sketch follows this list).
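
To make pattern 5 concrete, here is a minimal Python sketch of the modular-guardrail idea: each check is an independent, individually testable function, and a thin runner composes them in front of the model call. The names and checks here are illustrative assumptions, not the production guardrail services.

```python
# Minimal sketch of modular guardrails (pattern 5). The checks and names are
# illustrative assumptions, not a production implementation.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class GuardrailResult:
    passed: bool
    reason: str = ""


# A guardrail is an independent, individually testable and versionable check.
Guardrail = Callable[[str], GuardrailResult]


def no_obvious_secrets(text: str) -> GuardrailResult:
    """Naive input sanitization: block inputs that look like they carry credentials."""
    if "api_key=" in text.lower() or "password=" in text.lower():
        return GuardrailResult(False, "possible credential in input")
    return GuardrailResult(True)


def max_length(limit: int) -> Guardrail:
    """Factory for a simple length guardrail."""
    def check(text: str) -> GuardrailResult:
        if len(text) > limit:
            return GuardrailResult(False, f"input exceeds {limit} characters")
        return GuardrailResult(True)
    return check


def run_guardrails(text: str, guardrails: List[Guardrail]) -> GuardrailResult:
    """Run each guardrail independently; fail closed on the first violation."""
    for guard in guardrails:
        result = guard(text)
        if not result.passed:
            return result
    return GuardrailResult(True)


if __name__ == "__main__":
    verdict = run_guardrails("summarize this support ticket",
                             [no_obvious_secrets, max_length(4000)])
    print(verdict)  # GuardrailResult(passed=True, reason='')
```

The same pattern applies on the output side: a content-moderation guardrail can wrap the model's response before it reaches the user, keeping safety checks deployable and testable independently of the model itself.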

With the concrete patterns in this blueprint, you will leave ready to start deploying robust, scalable, and maintainable AI systems.

Celebrate your AI graveyard!

AI teams invest a lot of rigor in defining guidelines for new projects, but the same rigor is rarely applied to killing existing ones. In the absence of clear guidelines, teams let infeasible projects drag on for months. By streamlining the process of failing fast on infeasible projects, teams can significantly increase the overall success of their AI initiatives.

This talk covers how to fail fast on AI projects. AI projects have many more unknowns than traditional software projects: availability of the right datasets, training models to meet the required accuracy threshold, fairness and robustness of recommendations in production, and many more.
To fail fast, we manage AI initiatives as a conversion funnel, analogous to marketing and sales funnels. Projects start at the top of the five-stage funnel and can drop off at any stage, either temporarily put on ice or permanently suspended and added to the AI graveyard. Each stage of the funnel defines a clear set of unknowns to validate against a list of time-bound success criteria. The talk covers the details of the five-stage funnel and our experiences building a fail-fast culture where the AI graveyard is celebrated!
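
As a rough illustration of the funnel described above, the sketch below encodes stages as data with explicit unknowns, success criteria, and time budgets. The stage names, criteria, and durations are hypothetical placeholders, not the exact framework from the talk.

```python
# Illustrative sketch of a five-stage AI funnel with time-bound success criteria.
# Stage names, criteria, and time budgets are hypothetical placeholders.
from dataclasses import dataclass
from typing import List


@dataclass
class FunnelStage:
    name: str
    unknowns: List[str]        # what must be validated at this stage
    success_criteria: str      # how we decide whether to advance
    time_budget_weeks: int     # fail fast: hard deadline for the go/no-go call


FUNNEL = [
    FunnelStage("Problem framing", ["business metric", "AI fit"], "sponsor sign-off", 2),
    FunnelStage("Data feasibility", ["right datasets exist", "labels available"], "data audit passes", 3),
    FunnelStage("Model feasibility", ["accuracy threshold reachable"], "offline eval meets target", 4),
    FunnelStage("Production readiness", ["latency", "fairness", "robustness"], "shadow-mode metrics hold", 4),
    FunnelStage("Adoption", ["users act on the output"], "business KPI moves", 6),
]


def review(stage: FunnelStage, criteria_met: bool, weeks_spent: int) -> str:
    """Decide a project's fate at a stage gate: advance, keep iterating, or drop off."""
    if criteria_met:
        return "advance to next stage"
    if weeks_spent >= stage.time_budget_weeks:
        return "drop off: put on ice or add to the AI graveyard"
    return "continue validating within the time budget"


if __name__ == "__main__":
    print(review(FUNNEL[1], criteria_met=False, weeks_spent=4))
    # -> "drop off: put on ice or add to the AI graveyard"
```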
