The Linguistics of Large Language Models: What Your AI's Mistakes Reveal

When GPT-4 writes "I'll send you the attachment later" despite having no ability to send attachments, or ChatGPT claims it can "see" an image that isn't there, what's really happening? This talk dives into the fascinating patterns behind AI hallucinations, exploring how linguistic analysis of AI errors offers unique insight into how these models actually work. Through live examples, we'll examine common patterns in LLM mistakes and what they reveal about the underlying architecture and limitations of current AI systems.

Key Points:
1. Common patterns in AI hallucinations and their linguistic roots
2. The disconnect between capability claims and actual abilities
3. How context windows influence AI behavior
4. Understanding prompt injection through linguistic analysis
5. Real-world examples of AI linguistic patterns
6. What these patterns tell us about future AI development

Chaitanya Rahalkar

Software Security Engineer at Block, Inc. (formerly Square, Inc.)

Austin, Texas, United States
