Don't Trust Your Model: Zero Trust for the AI-Assisted SDLC

AI-assisted software development has transformed coding with tools like GitHub Copilot and Claude Code, but it introduces novel vulnerabilities such as prompt injection, context poisoning, and agent hijacking. This talk explores applying zero trust principles to the AI-assisted SDLC, evolving DevSecOps into ModSecOps for probabilistic systems. We'll cover a taxonomy of 22 threats, real-world incidents like the 2025 Anthropic breach, and defenses including guardrails, sandboxing, and multi-agent verification. Attendees will learn practical strategies to secure AI pipelines, from context assembly to tool calling, ensuring no default trust is placed in models or their outputs. Drawing from the OWASP GenAI Red Teaming Guide and recent research, this session provides actionable insights for developers and security pros to build resilient AI-driven workflows. No prior AI expertise is required; the focus is on bridging security gaps in modern development.

Danny Gershman

Co-host and Author of the Before The Commit Podcast and Book
