Session
Stop this Prompt! Common Security Pitfalls in GenAI-Powered Applications
The increasing popularity of GenAI-Powered applications that autonomously perform tasks, enhance functionality, and deliver outcomes without direct user interaction offer unprecedented autonomous capabilities. However, their internet connectivity, data system interactions, and use of plugins and agents introduce significant security risks.
In this session, we explore security pitfalls that developers do in the design and development of GenAI-powered applications. Drawing on insights gathered from the security review of numerous real-world LLM-based applications, we will discuss common security pitfalls across different types of applications.
The pitfalls include how improper prompt engineering, such as putting instruction in the user prompt, combining cross-customer information in a single prompt and lack of prompt guardrails can increase the risk of direct and indirect prompt injection and may lead to sensitive data leakage. How careless design, such as naive rendering of LLM outputs in web applications and connecting the LLM to tools without authorization can result in sensitive data leakage or running malicious code.
We will focus on the potential security impact of these vulnerabilities, highlight prevention strategies alongside reactive detection approaches, and share insights about security issues associated with various GenAI applications. Specifically, we will focus on how development micro-decisions can drastically influence the security posture of an entire system.
Target audience: security practitioners, AI builders, security officers (intermediate technical level)
Session outline:
Part 1: Intro
Short intro to LLMs (how they work, what they can be used for) and LLM-powered applications putting the spotlight on applications that incorporate GenAI to autonomously perform tasks, enhance functionality, and deliver outcomes without direct user interaction.
Part 2: GenAI Applications Threats
Focus on LLM-level vs. Application-level threats and argue why application-level threats are more crucial for handling (partially since LLM vendors harden their models against LLM threats)
Part 3: GenAI Application Pitfalls - Representative List
* Pitfalls of conversational applications: 1-2 examples
* Pitfalls of applications connected to your data systems (APIs, DBs) - 1-2 examples
* Pitfalls in RAG applications - 1-2 examples
* Pitfalls in code generation applications - 1-2 examples
* Pitfalls in Multimodal applications - 1-2 examples (if time permits)
For each of the above pitfalls, we will detail the pitfall, security impact and possible mitigations.
Part 4: summary and conclusion
Should educate your builders and should security experts involved in the design! Yesterday!
Value for audience: understand better the security risks of GenAI applications and how to mitigate them

Itsik Mantin
Head of AI Security Research, Intuit
Tel Aviv, Israel
Links
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top