Inside the Mind of an LLM

Recent studies in 2024 have revolutionised our understanding of large language models (LLMs).

This talk explores three key discoveries.

First, research shows that Llama 2 models rely on English in their internal representations regardless of the input and output language, which explains some of their language-related biases.

Second, breakthroughs by Anthropic and OpenAI have revealed monosemantic features in Claude 3 and GPT-4, making it possible to understand and steer topic-specific behaviours.

Third, studies demonstrate why LLMs memorise outlier data, particularly unique strings and personal information, which explains documented privacy breaches. We'll discuss the implications for LLM privacy and security.

Attendees will walk away with a deeper understanding of the inner workings of LLMs and with practical hints for mitigating their intrinsic limitations.

First delivered at Codemotion Conference 2024, Milan, Italy
Also held at WeAreDevelopers 2025, Berlin, Germany
Also held at PapersWeLove Milan 2024, Milan, Italy
Recorded session at https://youtu.be/m5qY4GNFEsA?si=a1SvJQYFVeQIcKZo

Emanuele Fabbiani

Head of AI at xtream, Professor at Catholic University of Milan

Milan, Italy
