Session
AI Observability and Evaluation: Ensuring Trustworthy Intelligent Systems
As AI systems become central to enterprise operations, their reliability, transparency, and performance are critical. This session explores the foundations of AI observability—covering monitoring, tracing, and debugging intelligent workflows—and introduces modern strategies for evaluating agentic systems. Participants will learn practical methods for assessing AI models and agent behavior, designing effective monitoring frameworks, and measuring outcomes through rigorous evaluation protocols. Drawing on real-world enterprise use cases, the talk highlights best practices, emerging tools, and proven approaches for building accountable, trustworthy AI systems.
Naveen Kumar Kotha
Senior Staff Software Engineer, Blue Yonder
Dallas, Texas, United States