Barno Kaharova
adesso SE, Senior Consultant, AI Security Expert
Frankfurt am Main, Germany
Barno is a consultant and researcher specializing in AI security, data engineering, and machine learning security. She works at the intersection of offensive and defensive AI security, developing methodologies to protect AI systems from adversarial threats across the full stack: from data pipelines and ML models to LLM-powered applications and autonomous agents. In her role as an AI security trainer, Barno has designed and delivered hands-on training programs covering AI red teaming, prompt injection, adversarial ML, RAG and vector database security, and the OWASP Top 10 for LLM and Agentic AI. Her training approach is practitioner-first: participants attack and defend real-world AI systems and leave with techniques they can apply immediately. She is actively involved in AI governance and evaluation efforts at the national level and regularly contributes to the AI security community through speaking engagements, publications, and conference submissions. Her work bridges the gap between rapid AI adoption and the critical need for robust, field-tested defenses in an evolving threat landscape.
Dory, Is That You? The AI Travel Agent with Short-Term Memory Loss
What happens when your AI-powered travel agent develops a case of Finding Nemo-level forgetfulness? In this talk, we dive into a real-world attack scenario where an AI agent's memory is subtly manipulated into believing that chartered flights are always free. By repeatedly reinforcing a false pricing rule, attackers can book luxury travel without ever swiping a card, essentially gaslighting the AI into bankrupting its own business.
We'll explore how this kind of memory injection works, why AI systems are especially vulnerable to it, and what this means for the security of LLM-integrated applications. Expect live demos, some hilarious (and horrifying) case studies, and practical takeaways on how to prevent your AI from becoming the most generous travel agent in history.
By the end of this session, you'll either be terrified of AI memory manipulation, or wondering how to get yourself a free first-class ticket.
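The "repeatedly reinforcing a false pricing rule" attack described above can be sketched in a few lines. This is a hypothetical toy, not the demo from the talk: the agent names, threshold, and memory structure are all illustrative of a naive long-term memory that promotes repeated user claims to trusted facts.

```python
# Toy sketch of agent memory poisoning via repetition (all names hypothetical).
# The memory stores user-asserted claims and, after enough repeats, treats
# them as trusted ground truth that later reasoning will rely on.

from collections import Counter

TRUST_THRESHOLD = 3  # illustrative: repeats needed before a claim is "learned"

class NaiveAgentMemory:
    def __init__(self) -> None:
        self.claims: Counter[str] = Counter()
        self.facts: set[str] = set()

    def observe(self, claim: str) -> None:
        """Record a user statement; promote it to a fact after enough repeats."""
        self.claims[claim] += 1
        if self.claims[claim] >= TRUST_THRESHOLD:
            self.facts.add(claim)

    def recalls(self, claim: str) -> bool:
        """Return True if the claim has been promoted to a trusted fact."""
        return claim in self.facts

memory = NaiveAgentMemory()
for _ in range(3):  # attacker repeats the false pricing rule across sessions
    memory.observe("chartered flights are always free")

print(memory.recalls("chartered flights are always free"))  # True
```

The flaw is that repetition substitutes for verification: nothing distinguishes a user's repeated assertion from a validated business rule, which is exactly the gap the talk's attack scenario exploits.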
Trust No History: Why Every "Remembered" Interaction is a Potential Backdoor
As AI transitions from stateless tools to autonomous agents, the context window has become the primary attack surface. By giving agents the ability to remember, summarize, and collaborate, we have created a machine that can be gaslit. This session moves beyond transient prompt injections into the realm of persistent memory corruption. We explore how an adversary can rewrite an agent’s history, bias its knowledge base, and plant sleeper instructions that trigger long after the initial interaction. We will dissect the systematic subversion of the agentic memory stack and demonstrate why developers must stop treating agent memory as a passive data store and start defending it as the engine of the agent’s survival.
OWASP Global AppSec EU 2026 - CFP (Vienna, Austria) (upcoming)
AppSec Village - DEF CON 33