Session

How to cure generative AI hallucinations with observability

Generative AI (GenAI) applications are known to “hallucinate,” or confidently respond with false or made-up information. Hallucinations are one of the biggest drawbacks to using this technology and are also famously hard to detect, prevent, and fix. But researchers and engineers are racing to solve this problem as GenAI adoption increases.

So, for a DevOps team adopting this tech now, what’s the best way to tell when hallucinations happen, and most importantly, why?

Engineers need a way to trace the path of a response: from the user’s prompt, through the large language model (LLM) integration, agent work, and API calls, and back to the end user. By monitoring every step, engineers can learn where hallucinations originate and apply a fix, such as adding more context to the prompt or ensuring the LLM re-summarizes responses accurately, so they can quickly take action and improve the quality of their results.
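For illustration only, and not the speaker’s material or New Relic’s agent API: a minimal Python sketch of this idea, using OpenTelemetry spans to record each step of a request (prompt, context retrieval, LLM call, post-processing) so the origin of a bad answer can be traced. It assumes the opentelemetry-api package is installed with an exporter configured, and the retrieve_context, call_llm, and summarize helpers are hypothetical placeholders for a real pipeline.

from opentelemetry import trace

tracer = trace.get_tracer("genai-tracing-demo")

# Hypothetical stand-ins for the real pipeline pieces; in practice these
# would call your retrieval layer, LLM provider, and summarization step.
def retrieve_context(prompt: str) -> str:
    return "retrieved documents relevant to: " + prompt

def call_llm(prompt: str, context: str) -> str:
    return f"model answer for '{prompt}' using {len(context)} chars of context"

def summarize(answer: str) -> str:
    return answer[:200]

def answer_question(user_prompt: str) -> str:
    # One parent span per user request, with a child span per pipeline step,
    # so an engineer can see exactly where a hallucinated answer came from.
    with tracer.start_as_current_span("genai.request") as request_span:
        request_span.set_attribute("genai.prompt", user_prompt)

        with tracer.start_as_current_span("genai.build_context") as ctx_span:
            context = retrieve_context(user_prompt)
            ctx_span.set_attribute("genai.context.chars", len(context))

        with tracer.start_as_current_span("genai.llm_call") as llm_span:
            raw_answer = call_llm(user_prompt, context)
            llm_span.set_attribute("genai.completion", raw_answer)

        with tracer.start_as_current_span("genai.post_process") as post_span:
            final_answer = summarize(raw_answer)
            post_span.set_attribute("genai.final_answer", final_answer)

        return final_answer

With spans like these, a reported hallucination can be matched to its trace and inspected step by step, for example to see whether the retrieved context was empty or the re-summarization step changed the meaning of the model’s answer.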

In this session, Jemiah Sius, Senior Director of Developer Relations at New Relic, will explain how companies can use observability to help prevent GenAI hallucinations and share ways DevOps teams can improve their LLM integrations with context, agents, chained models, and more—with a focus on how to do it affordably.

In his role, Jemiah leads a global team focused on solving friction points for engineering teams—keeping them at the center of New Relic’s innovation strategy—so they can more easily find and fix issues before they impact their business and customers.

Through that work, he understands the challenges DevOps teams face and how, if deployed strategically, GenAI solutions can help solve them.

Jemiah Sius

New Relic | Senior Director, Developer Relations

Miami, Florida, United States
