Confidently incorrect

By now, it’s clear that LLMs are transforming how we interact with information, automate workflows, and build intelligent applications. But what happens when these powerful systems “hallucinate”, confidently generating answers that sound right but are actually wrong?

In this session, we’ll demystify the phenomenon of AI hallucinations: why they happen, how often they occur in today’s top models, and why they matter for everyone from developers to business leaders. Drawing on the latest research and real-world examples, we’ll explore the technical roots of hallucinations and their impact on trust, safety, and operational efficiency. We will also look at the evolving toolkit for reducing hallucinations, from retrieval-augmented generation and prompt engineering to automated fact-checking and human feedback.

This talk is rated at level 300. While we’ll dive deep into the technical causes and mitigation strategies for hallucinations, the session is designed to be accessible to a broad audience, including technologists, leaders, and anyone curious about the future of trustworthy AI.

Uroš Miletić

IPS, Chief Technology Officer

Prague, Czechia
