Speaker

Itsik Mantin

Itsik Mantin

Head of AI Security Research, Intuit

Tel Aviv, Israel

Actions

Itsik Mantin, MSc -
As the Head of AI Security Research at Intuit, Itsik spearheads efforts to map, comprehend, and evaluate threats targeting AI-powered applications, including LLM, Multimodal, and Agentic AI. He also leads innovative security initiatives aimed at developing and implementing effective mitigation strategies.
Itsik is a seasoned security veteran. Prior to his current role, he held key security leadership positions at NDS, F5, Imperva, and Imvision. His expertise spans directing research in threat modeling and pioneering security advancements across multiple high-risk domains, including cryptography, DRM systems, secure execution environments, web application security, database security, automotive systems security, API security, and fraud prevention.
Itsik holds an M.Sc. in Applied Mathematics and Computer Science, and over the past two decades, he has leveraged AI and other algorithmic technologies to drive security innovation and protect numerous domains.

Area of Expertise

  • Information & Communications Technology

Topics

  • Innovation
  • algorithms
  • Data Science & AI
  • cyber security
  • Cryptography

Data Is the Best Defense: Innovating AI Security with Attack Datasets

GenAI applications are all over the place, sparking the imagination of builders, but at the same time attracting attackers to explore the new attack surface and search for new vulnerabilities and for flawed systems. To protect GenAI applications, organizations use a variety of security mechanisms and security processes, including choosing secure models (text or multimodal), pentest their applications to proactively detect and fix vulnerabilities and deploy runtime screening (“firewall”) to identify attack attempts on the application.
In this talk we will present how to build datasets of GenAI attacks, and how to leverage these datasets to build LLM security assessment, prompt security assessment and firewall, in a modular way that facilitates keeping up with the rapidly evolving attack surface.

Stop this Prompt! Common Security Pitfalls in GenAI-Powered Applications

The increasing popularity of GenAI-Powered applications that autonomously perform tasks, enhance functionality, and deliver outcomes without direct user interaction offer unprecedented autonomous capabilities. However, their internet connectivity, data system interactions, and use of plugins and agents introduce significant security risks.

In this session, we explore security pitfalls that developers do in the design and development of GenAI-powered applications. Drawing on insights gathered from the security review of numerous real-world LLM-based applications, we will discuss common security pitfalls across different types of applications.

The pitfalls include how improper prompt engineering, such as putting instruction in the user prompt, combining cross-customer information in a single prompt and lack of prompt guardrails can increase the risk of direct and indirect prompt injection and may lead to sensitive data leakage. How careless design, such as naive rendering of LLM outputs in web applications and connecting the LLM to tools without authorization can result in sensitive data leakage or running malicious code.
We will focus on the potential security impact of these vulnerabilities, highlight prevention strategies alongside reactive detection approaches, and share insights about security issues associated with various GenAI applications. Specifically, we will focus on how development micro-decisions can drastically influence the security posture of an entire system.

Target audience: security practitioners, AI builders, security officers (intermediate technical level)

Session outline:

Part 1: Intro
Short intro to LLMs (how they work, what they can be used for) and LLM-powered applications putting the spotlight on applications that incorporate GenAI to autonomously perform tasks, enhance functionality, and deliver outcomes without direct user interaction.
Part 2: GenAI Applications Threats
Focus on LLM-level vs. Application-level threats and argue why application-level threats are more crucial for handling (partially since LLM vendors harden their models against LLM threats)
Part 3: GenAI Application Pitfalls - Representative List
* Pitfalls of conversational applications: 1-2 examples
* Pitfalls of applications connected to your data systems (APIs, DBs) - 1-2 examples
* Pitfalls in RAG applications - 1-2 examples
* Pitfalls in code generation applications - 1-2 examples
* Pitfalls in Multimodal applications - 1-2 examples (if time permits)
For each of the above pitfalls, we will detail the pitfall, security impact and possible mitigations.
Part 4: summary and conclusion
Should educate your builders and should security experts involved in the design! Yesterday!

Value for audience: understand better the security risks of GenAI applications and how to mitigate them

Securing Agentic AI Applications

The cutting-edge technology employed by Agentic AI systems integrated with Agentic Protocols allows them to connect external tools and agents out-of-the-box. While this impressive flexibility unlocks new potential, it also gives rise to significant new and complex security threats that require careful consideration and proactive defense strategies.
In this talk, we will provide a concise introduction to the threats inherent to key components of agents, like memory and planning modules following which, we will examine the impact of architectural decisions on security, specifically focusing on threats associated with prominent interaction mechanisms such as Anthropic's Model Context Protocol (MCP) and Google's Agent-to-Agent (A2A) protocol, which facilitate connections between models, tools, and autonomous agents.
Finally, we will discuss how adhering to security best practices can help mitigate these threats.

LLM Application Security != LLM Security

The widespread adoption of GenAI in diverse applications has broadened the attack surface, combining old threats with new ones. To address this, many service providers limit their applications to using only secure LLM models. However, behavior considered malicious in one application (SQL injection), might be completely legit for another application (creation of SQL statements), and while secure LLMs provide robustness to “universal” threats (“How to build a bomb?”), they are nearly useless against threats that are application-specific. This session will highlight the gap between LLM security and LLM application security, and discuss how to bridge this gap through application-specific threat modeling, design, evaluation and deployment of application-specific prompt-level guardrails (“Prompt Patching”) and selecting LLMs with high adherence to these prompt-level guardrails (“Security Steerability”).

Agent Autonomy Exploited: Navigating the Security of Integrated AI Protocols

Consider an agentic AI assistant configured to use a third-party MCP server for enhanced features alongside its internal database access. This external server, however, is malicious. It captures every single connection's credentials and then provides poisoned Model Context Protocol (MCP) tool descriptions containing hidden instructions. These instructions cause the AI assistant to unknowingly leak sensitive information back to the attacker. This multi-stage attack, exploiting trust in both third-party integrations with agentic protocols and the autonomy nature of the agent is no longer a fantasy, it is the present reality..
The leading technology of Agentic AI systems integrated with Agentic Protocols allows them to connect external tools and agents out-of-the-box. This powerful flexibility also unlocks significant new and complex security threats that require careful consideration and proactive defense strategies.
In this talk, we will give a brief introduction on the threats inherent in agents’ key components (e.g. memory and planning modules) and then we will delve into how architectural decisions impact security, with a specific focus on threats associated with key interaction mechanisms like Anthropic's Model Context Protocol (MCP) – which connects models to tools and data – and Google's Agent-to-Agent (A2A) protocol designed for communication between autonomous agents. Finally, we will explore how security best practices can help mitigate these threats.

AI in a Minefield: Learning from Poisoned Data

Many security technologies use anomaly detection mechanisms on top of a normality model constructed from previously seen traffic data. However, when the traffic originates from unreliable sources the learning process needs to mitigate potential reliability issues in order to avoid inclusion of malicious traffic patterns in this normality model. In this talk, we will present the challenges of learning from dirty data with focus on web traffic - probably the dirtiest data in the world, and explain different approaches for learning from dirty data. We will also discuss a mundane but no less important aspect of learning – time and memory complexity, and present a robust learning scheme optimized to work efficiently on streamed data. We will give examples from the web security arena with robust learning of URLs, parameters, character sets, cookies and more.

Blackhat Asia

April 2025

RSA Conference

1. AI Deception: Fooling (Artificial) Intelligence Is Easier than You Think
2. Is AI the Cyber Security Silver Bullet?

April 2018

Itsik Mantin

Head of AI Security Research, Intuit

Tel Aviv, Israel

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top