Session
Streamlining LLM Performance Evaluation with .NET and Microsoft.Extensions.AI.Evaluation
Evaluating the performance of an AI model is a crucial step in the development process, and one that should not be skipped in your LLM-powered .NET app. In this session, we will dive deep into the world of Large Language Model (LLM) evaluation. You will discover how to easily set up an evaluation pipeline with Microsoft.Extensions.AI.Evaluation to assess the performance of your LLM app. We will also explore how to automate this process in your Azure DevOps pipeline, ensuring a seamless and efficient workflow.
Join us to unlock the full potential of your AI models and streamline your development process by making the quality of your AI app measurable. This will boost not only the quality but also the maintainability of your app!
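
To give a taste of what the session covers, here is a minimal sketch of a single evaluation based on the preview Microsoft.Extensions.AI.Evaluation packages. Type and method names may vary between preview releases, and GetChatClient is a hypothetical placeholder for however you construct your IChatClient:

    using Microsoft.Extensions.AI;
    using Microsoft.Extensions.AI.Evaluation;
    using Microsoft.Extensions.AI.Evaluation.Quality;

    // Wrap the "judge" model (any IChatClient, e.g. Azure OpenAI) for the evaluators to use.
    IChatClient judge = GetChatClient(); // hypothetical helper: construct your own IChatClient here
    var chatConfiguration = new ChatConfiguration(judge);

    // Pick one of the built-in quality evaluators; CoherenceEvaluator scores how well the answer reads.
    IEvaluator evaluator = new CoherenceEvaluator();

    // The conversation under test: the user's question and the response to be scored.
    // For brevity the judge model also produces the answer; in a real test it comes from your LLM app.
    var question = new ChatMessage(ChatRole.User, "What is Microsoft.Extensions.AI.Evaluation?");
    ChatResponse answer = await judge.GetResponseAsync([question]);

    // The evaluator prompts the judge model to rate the answer and returns named metrics.
    EvaluationResult result = await evaluator.EvaluateAsync([question], answer, chatConfiguration);

    foreach (var metric in result.Metrics.Values)
    {
        Console.WriteLine($"{metric.Name}: {(metric as NumericMetric)?.Value}");
    }

The same pattern applies to the other built-in evaluators (fluency, relevance, groundedness, equivalence, and so on), and once the evaluations live in a test project, an Azure DevOps pipeline can run them with an ordinary dotnet test step and publish the resulting scores.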

Cédric Mendelin
isolutions, Senior Software Developer, Deputy Team Lead
Basel, Switzerland