Responsible AI: Evaluating truthfulness in Azure OpenAI model outputs

In an era where information is abundant yet often unreliable, evaluating the truthfulness of data has become a significant challenge. This session covers the methodology and tools for assessing the veracity of information with Ground Truth Datasets, leveraging the capabilities of Azure OpenAI models.

Ground Truth Datasets serve as a benchmark for verifying the accuracy and reliability of machine learning models. By providing a baseline of factual data, these datasets are essential for training and evaluating AI models to ensure they produce truthful and trustworthy outputs.
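As a rough illustration of this idea, here is a minimal Python sketch of scoring an Azure OpenAI deployment against a Ground Truth dataset. This is not the session's actual material: the deployment name ("gpt-4o"), the environment variables, the tiny sample dataset, and the naive containment check are all assumptions made for the example.

```python
# A minimal sketch (assumed names and data): query an Azure OpenAI
# deployment and score its answers against a Ground Truth dataset.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],      # assumed env var
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # assumed env var
)

# Hypothetical Ground Truth dataset: question/reference-answer pairs.
ground_truth = [
    {"question": "In what year did Microsoft Azure launch?", "answer": "2010"},
]

def model_answer(question: str) -> str:
    """Ask the deployed model for a short factual answer."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed deployment name
        messages=[
            {"role": "system", "content": "Answer concisely with facts only."},
            {"role": "user", "content": question},
        ],
        temperature=0,  # deterministic output for repeatable evaluation
    )
    return response.choices[0].message.content.strip()

# Naive exact-containment check; real evaluations use richer metrics
# such as token-level F1, semantic similarity, or an LLM grader.
correct = sum(
    item["answer"].lower() in model_answer(item["question"]).lower()
    for item in ground_truth
)
print(f"Accuracy against ground truth: {correct}/{len(ground_truth)}")
```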

I will also explain how to evaluate your model when you do not have access to Ground Truth data, using LLMs for text-inconsistency assessment.
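One common way to do this is an LLM-as-judge setup, where the model grades how consistent a claim is with a source text. The sketch below assumes that approach; the prompt wording, the 1-to-5 scale, and the deployment name are illustrative choices, not the speaker's exact method.

```python
# A minimal sketch of LLM-based inconsistency assessment without Ground
# Truth: the model rates the factual consistency of a CLAIM with a SOURCE.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],      # assumed env var
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # assumed env var
)

JUDGE_PROMPT = """You are an evaluator. Rate how factually consistent the
CLAIM is with the SOURCE on a scale from 1 (contradicts) to 5 (fully
supported). Reply with the number only.

SOURCE: {source}
CLAIM: {claim}"""

def consistency_score(source: str, claim: str) -> int:
    """Use the model as a judge of claim/source consistency."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed deployment name
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(source=source, claim=claim)}],
        temperature=0,  # deterministic grading
    )
    return int(response.choices[0].message.content.strip())

# Example: a faithful paraphrase of the source should score high.
print(consistency_score(
    "Azure OpenAI Service became generally available in January 2023.",
    "Azure OpenAI reached general availability in 2023.",
))
```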

Luis Beltran

Microsoft MVP

Zlín, Czechia
