Session

Don't do anything (A)I wouldn't do

We all rely on AI tools every day, but would we take responsibility for the decisions they make?

This talk is a journey through AI incidents and the solutions developed to mitigate them. We begin by examining the safety risks associated with traditional Machine Learning models, including fairness issues, privacy violations, and other ethical concerns.

We then move into the age of LLMs and AI agents, which introduce new risks such as hallucinations, data leakage, and unintended tool execution. These failures are subtler, harder to detect, and, because of agent autonomy, potentially more dangerous. Along the way, we demonstrate how safety techniques have evolved in tandem with the technology to address both traditional and emerging risks.

Throughout the talk, we draw inspiration from real AI incidents to show that these are not just interesting research problems, but real-world failures with tangible consequences, reminding us why we should not let AI do anything we wouldn’t do.

Luca Corbucci

Ph.D. candidate in Computer Science, podcaster and community manager
