Session

“Security Frameworks & Red Teaming: A Powerful Duo for Protecting AI and LLM Applications

As AI and large language models (LLMs) become increasingly embedded in real-world applications — from chatbots and copilots to security tools and customer service — the attack surface is growing faster than our ability to secure it.

This talk explores combining security frameworks with red teaming methodologies to build resilient, secure AI/LLM systems. Using real-world attack scenarios like prompt injection, model abuse, and data leakage, we’ll show how frameworks such as the OWASP LLM Top 10, NIST AI Risk Management Framework, and MITRE ATLAS can guide developers, security teams, and researchers in identifying and mitigating risk.

But frameworks are only the first step. We’ll go beyond theory and into practice, demonstrating how red teaming can expose hidden vulnerabilities in AI pipelines, from model behavior to prompt engineering flaws to inadequate output filtering. Attendees will walk away with a practical roadmap for evaluating, testing, and hardening their AI-powered applications.

Whether building an LLM app, defending one, or breaking one, this talk will help you connect structured defense with adversarial testing in a rapidly evolving landscape.

Samuel A. Cordoba

Speaker | Strategic CISO | Cybersecurity Executive | Independent Security Researcher | Adversary Emulation Enthusiast

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top