Session

Beyond Logger.Debug: Instrumenting Agentic Workflows with .NET and App Insights

Adding an LLM to your .NET application is easy. Understanding why it failed in production at 3:00 AM is not. Traditional application monitoring focuses on HTTP status codes and CPU spikes, but AI-enhanced applications introduce a new class of failure modes: semantic drift, "hallucination" timeouts, and runaway agent loops.

In this session, we’ll look at how to evolve your application’s instrumentation to keep pace with modern AI features. We will move past the basics of Azure Monitor and explore how to use Application Insights and OpenTelemetry to get a clear view of how your .NET code is actually interacting with intelligent models.

What we will cover at the Application Layer:

Tracing the Thought Process: Using Custom Spans to visualize "multi-turn" agent reasoning in the App Insights Transaction Timeline, so you can see exactly where an agent lost the plot.
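In .NET, custom spans like these are typically created with the built-in ActivitySource API, which OpenTelemetry and the Azure Monitor exporter pick up automatically. A minimal sketch of that pattern, assuming a hypothetical source name "MyApp.Agents" and span names of my own choosing (none of these identifiers come from the session itself):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public class AgentOrchestrator
{
    // Register this source name with OpenTelemetry / the Azure Monitor
    // exporter so its spans appear in the App Insights Transaction Timeline.
    private static readonly ActivitySource Source = new("MyApp.Agents");

    public async Task RunTurnAsync(string goal)
    {
        // Parent span covering one full agent turn.
        using var turn = Source.StartActivity("agent.turn");
        turn?.SetTag("agent.goal", goal);

        // One child span per reasoning step makes loops and
        // stalls visible on the timeline, nested under the turn.
        using var step = Source.StartActivity("agent.reasoning_step");
        step?.SetTag("agent.step.kind", "plan");

        await Task.CompletedTask; // the model call would go here
    }
}
```

Because Activity tracks the current span ambiently, nested StartActivity calls parent themselves automatically; no span IDs need to be passed by hand.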

Semantic Logging: Moving beyond strings to log "Semantic Metadata." Learn how to capture prompt versions, token usage, and "grounding scores" as custom properties on your telemetry.
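One way to capture that semantic metadata is to attach structured tags to the current span. In the sketch below, the gen_ai.* attribute names follow the OpenTelemetry generative-AI semantic conventions; the app.* names and the helper itself are hypothetical illustrations, not APIs from the session:

```csharp
using System.Diagnostics;

public static class LlmTelemetry
{
    // Hypothetical helper: stamp semantic metadata onto the active span
    // so it surfaces as custom properties on the telemetry in App Insights.
    public static void RecordCompletion(
        string promptVersion, int inputTokens, int outputTokens, double groundingScore)
    {
        var activity = Activity.Current;
        activity?.SetTag("app.prompt.version", promptVersion);
        // Token-usage names per the OpenTelemetry gen_ai semantic conventions.
        activity?.SetTag("gen_ai.usage.input_tokens", inputTokens);
        activity?.SetTag("gen_ai.usage.output_tokens", outputTokens);
        activity?.SetTag("app.grounding_score", groundingScore);
    }
}
```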

Dependency Analysis for AI: How to treat LLM calls as external dependencies with specific "Success" criteria that aren't just "200 OK."
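With the classic Application Insights SDK (Microsoft.ApplicationInsights NuGet package), TelemetryClient.TrackDependency lets you set the success flag yourself rather than inheriting it from the HTTP status. A rough sketch, where the grounding-score threshold and all names are illustrative assumptions:

```csharp
using System;
using Microsoft.ApplicationInsights;

public class LlmDependencyTracker
{
    private readonly TelemetryClient _telemetry;

    public LlmDependencyTracker(TelemetryClient telemetry) => _telemetry = telemetry;

    // "Success" here is semantic: an HTTP 200 that returned an ungrounded
    // or truncated answer is still recorded as a failed dependency.
    public void TrackLlmCall(DateTimeOffset start, TimeSpan duration,
                             double groundingScore, bool wasTruncated)
    {
        bool semanticSuccess = groundingScore >= 0.7 && !wasTruncated; // threshold is illustrative

        _telemetry.TrackDependency(
            "LLM",               // dependency type shown in App Insights
            "chat.completions",  // dependency name
            "my-model-deployment", // command/data field (hypothetical)
            start, duration, semanticSuccess);
    }
}
```

Failed-dependency charts and alerts in App Insights then reflect semantic quality, not just transport-level health.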

The Copilot for Your Logs: Using the new AI-driven query capabilities in Azure to ask your logs questions like, "Why did users start getting frustrated with the chatbot after the last deployment?"

If you are shipping AI features without specific application-level monitoring, you are flying blind. Join us to learn how to build apps that are as observable as they are intelligent.

Isaac Levin

Developer Advocate

Woodinville, Washington, United States
