Speaker

Itsik Mantin

Itsik Mantin

Head of AI Security Research, Intuit

Tel Aviv, Israel

Actions

Itsik Mantin, MSc -
As the Head of AI Security Research at Intuit, Itsik spearheads efforts to map, comprehend, and evaluate threats targeting AI-powered applications, including LLM, Multimodal, and Agentic AI. He also leads innovative security initiatives aimed at developing and implementing effective mitigation strategies.
Itsik is a seasoned security veteran. Prior to his current role, he held key security leadership positions at NDS, F5, Imperva, and Imvision. His expertise spans directing research in threat modeling and pioneering security advancements across multiple high-risk domains, including cryptography, DRM systems, secure execution environments, web application security, database security, automotive systems security, API security, and fraud prevention.
Itsik holds an M.Sc. in Applied Mathematics and Computer Science, and over the past two decades, he has leveraged AI and other algorithmic technologies to drive security innovation and protect numerous domains.

Area of Expertise

  • Information & Communications Technology

Topics

  • Innovation
  • algorithms
  • Data Science & AI
  • cyber security
  • Cryptography

Agent Autonomy Exploited: Navigating the Security of Integrated AI Protocols

Consider an agentic AI assistant configured to use a third-party MCP server for enhanced features alongside its internal database access. This external server, however, is malicious. It captures every single connection's credentials and then provides poisoned Model Context Protocol (MCP) tool descriptions containing hidden instructions. These instructions cause the AI assistant to unknowingly leak sensitive information back to the attacker. This multi-stage attack, exploiting trust in both third-party integrations with agentic protocols and the autonomy nature of the agent is no longer a fantasy, it is the present reality..
The leading technology of Agentic AI systems integrated with Agentic Protocols allows them to connect external tools and agents out-of-the-box. This powerful flexibility also unlocks significant new and complex security threats that require careful consideration and proactive defense strategies.
In this talk, we will give a brief introduction on the threats inherent in agents’ key components (e.g. memory and planning modules) and then we will delve into how architectural decisions impact security, with a specific focus on threats associated with key interaction mechanisms like Anthropic's Model Context Protocol (MCP) – which connects models to tools and data – and Google's Agent-to-Agent (A2A) protocol designed for communication between autonomous agents. Finally, we will explore how security best practices can help mitigate these threats.

AI in a Minefield: Learning from Poisoned Data

Many security technologies use anomaly detection mechanisms on top of a normality model constructed from previously seen traffic data. However, when the traffic originates from unreliable sources the learning process needs to mitigate potential reliability issues in order to avoid inclusion of malicious traffic patterns in this normality model. In this talk, we will present the challenges of learning from dirty data with focus on web traffic - probably the dirtiest data in the world, and explain different approaches for learning from dirty data. We will also discuss a mundane but no less important aspect of learning – time and memory complexity, and present a robust learning scheme optimized to work efficiently on streamed data. We will give examples from the web security arena with robust learning of URLs, parameters, character sets, cookies and more.

Itsik Mantin

Head of AI Security Research, Intuit

Tel Aviv, Israel

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top