LLM Application Security != LLM Security
The widespread adoption of GenAI across diverse applications has broadened the attack surface, combining old threats with new ones. To address this, many service providers limit their applications to using only secure LLM models. However, behavior considered malicious in one application (e.g., SQL injection) may be completely legitimate in another (e.g., generating SQL statements), and while secure LLMs provide robustness against “universal” threats (“How do I build a bomb?”), they are nearly useless against threats that are application-specific. This session will highlight the gap between LLM security and LLM application security, and discuss how to bridge it through application-specific threat modeling; the design, evaluation, and deployment of application-specific prompt-level guardrails (“Prompt Patching”); and the selection of LLMs with high adherence to these prompt-level guardrails (“Security Steerability”).

Itsik Mantin
Head of AI Security Research, Intuit
Tel Aviv, Israel