Session
Builders and Breakers: A Collaborative Look at Securing LLM-Integrated Apps
As Large Language Models (LLMs) become an integral part of modern applications, they not only enable new functionalities but also introduce unique security vulnerabilities. In this collaborative talk, we bring together two perspectives: a builder who has experience developing and defending LLM-integrated apps, and a penetration tester who specialises in AI red teaming. Together, we’ll dissect the evolving landscape of AI security.
On the defensive side, we’ll explore strategies like prompt injection prevention, input validation frameworks, and continuous testing to protect AI systems from adversarial attacks. From the offensive perspective, we’ll showcase how techniques like data poisoning and prompt manipulation are used to exploit vulnerabilities, as well as the risks tied to generative misuse that can lead to data leaks or unauthorised actions.
Through live demonstrations and real-world case studies, participants will witness both the attack and defence in action, gaining practical insights into securing AI-driven applications. Whether you’re developing AI apps or testing them for weaknesses, you’ll leave this session equipped with actionable knowledge on the latest methods for protecting LLM systems. This collaborative session offers a comprehensive look into AI security, combining the expertise of two professionals with distinct backgrounds - builder and breaker.
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top