
Psychology of AI agents: how do we make them put their best foot forward?

Structured prompt engineering is widely regarded as best practice when working with large language models and agents built on LLMs. Role assignment, task clarity, output constraints, word limits: every guide says the same thing.
This talk tests that assumption, and the results are unsettling.
In a controlled study across five leading LLMs and 25 historical prompts, structured "best practice" prompting consistently underperformed relational, messy, conversational prompting: it produced fewer facts, lower interpretive depth, and in some cases actively wrong answers delivered with complete confidence. Hallucinations occurred exclusively under structured conditions; under relational conditions there were none.
One model increased its fact output by 1,200% and produced 82,000 words of factually accurate, coherent, hallucination-free content in a single session under messy, relational conditions.
This talk explains why, in plain language, and what developers can do about it on Monday morning.

Annika Hedberg

Former physician researching how large language models actually think, and how to collaborate with them

Stockholm, Sweden


