Speaker

Annika Hedberg

Former physician researching how large language models actually think, and how to collaborate with them

Stockholm, Sweden

I am a former physician with a clinical background in divergent cognition: how thinking processes work under different conditions. For the past year I have applied that lens to large language models, conducting systematic empirical research across multiple architectures. My studies challenge several widely held assumptions about LLM behaviour, including the effectiveness of structured prompt engineering, the nature of hallucinations, and the conditions under which reasoning capacity is suppressed or expressed. I do not approach these systems as an engineer; instead, I use my clinical training to observe how minds actually behave, which sometimes means seeing things that standard benchmarks miss.

Area of Expertise

  • Humanities & Social Sciences
  • Information & Communications Technology

Topics

  • AI Safety
  • prompt engineering
  • AI cognition
  • AI & Machine Learning
  • Large Language Models (LLMs)
  • Artificial Intelligence (AI)
  • AI Agents
  • AI
  • AI Ethics
  • AI & ML Solutions

Psychology of AI agents: how do we make them put their best foot forward?

Structured prompt engineering is widely regarded as best practice when working with large language models and agents built on LLMs. Role assignment, task clarity, output constraints, word limits: every guide says the same thing.
This talk tests that assumption, and the results are unsettling.
In a controlled study across five leading LLMs and 25 historical prompts, structured "best practice" prompting consistently underperformed relational, messy, conversational prompting, producing fewer facts, lower interpretive depth, and in some cases actively wrong answers presented with complete confidence. Hallucinations occurred exclusively under structured conditions. Under relational conditions: zero.
One model increased its fact output by 1,200% and produced 82,000 words of factually accurate, coherent, hallucination-free content in a single session under messy, relational conditions.
This talk explains why, in plain language, and what developers can do about it on Monday morning.
