
Mirror, mirror: LLMs and the illusion of humanity

Large language models (LLMs) exploded into mainstream awareness in 2022 and have continued to fascinate us since. But what is it about LLMs, compared to other, similarly complex algorithms, that has so captured our imagination? And why are we so ready to believe that these models have started to show signs of human behavior?

In this talk, we’ll delve into some of the more extraordinary claims that have been made about LLMs in the past few years, including that these models are showing signs of sentience or intelligence. We’ll discuss why humans tend to see such traits in these models, due to the way they mirror back a “lossy compression” of our humanity. And we’ll talk about how dispelling myths about LLMs being anything more than language models can help us apply them to their best current uses.

Jodie Burchell

Developer Advocate in Data Science

Berlin, Germany

