Session
Understanding Prompt Injection - Techniques, Challenges, and Advanced Escalation
As developers, we’re embracing AI and large language models (LLMs) in our applications more than ever. However, there’s an increasing concern we need to be aware of: prompt injection. This sneaky attack can undermine our AI systems by manipulating the input to produce unintended outputs.
In this session, we’ll explain prompt injection and examine some common techniques attackers use, such as instruction overrides and hidden prompts. But we won't stop there; we’ll also explore advanced challenges, including escalation techniques that can exacerbate the risks.
Most importantly, we won’t just identify the problem. We’ll dive into practical steps you can take to mitigate these risks and keep your AI interactions secure. Join us to gain insights that will help you stay ahead in AI security and ensure your applications remain robust against these emerging threats.
Brian Vermeer
Java Champion | Staff Developer Advocate @ Snyk
Breda, The Netherlands
Links
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top