Security Considerations For An AI-Enabled World
AI agents are everywhere now - from coding assistants that can access your entire codebase to customer service bots that can book flights and process refunds. But while we're racing to give LLMs more capabilities, have we stopped to consider the security implications? When your AI can execute code, query databases, and make API calls on behalf of users, the attack surface isn't just bigger - it's fundamentally different.
In this session, we break down the real security risks of AI-enabled applications, from prompt injection attacks that can hijack your agent's tools to data exfiltration through seemingly innocent queries. We'll explore how attackers exploit LLM vulnerabilities and demonstrate practical defenses: securing tool calls, enforcing authorization boundaries in agentic systems, and building robust validation layers between your LLM and your critical systems.
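
To make that last point concrete, here is a minimal sketch of what a validation layer between an LLM and the systems it touches might look like. This is an illustration under assumptions, not the speaker's implementation; every name in it (ToolCall, refund_order, permissions_for) is hypothetical. The idea is that tool calls are denied by default, authorized per caller, and argument-checked outside the model, so a prompt-injected instruction cannot widen the agent's reach.

```python
# Hypothetical sketch of a validation layer between an LLM agent and its tools.
# Tool calls are denied by default, checked against the caller's permissions,
# and argument-validated in ordinary application code, never inside the model.

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ToolCall:
    name: str              # tool the model asked to invoke
    args: dict[str, Any]   # arguments proposed by the model
    user_id: str           # the human the agent is acting for

# Registry of tools the model may call, plus the permission each one requires.
TOOLS: dict[str, tuple[Callable[..., Any], str]] = {}

def register(name: str, required_permission: str):
    def wrap(fn: Callable[..., Any]) -> Callable[..., Any]:
        TOOLS[name] = (fn, required_permission)
        return fn
    return wrap

@register("refund_order", required_permission="refunds:write")
def refund_order(order_id: str, amount_cents: int) -> str:
    return f"refunded {amount_cents} cents on order {order_id}"

def permissions_for(user_id: str) -> set[str]:
    # Stand-in for a real authorization lookup (database, identity provider, etc.).
    return {"refunds:write"} if user_id == "alice" else set()

def execute(call: ToolCall) -> str:
    if call.name not in TOOLS:
        raise PermissionError(f"unknown tool: {call.name}")  # deny by default
    fn, needed = TOOLS[call.name]
    if needed not in permissions_for(call.user_id):
        raise PermissionError(f"{call.user_id} lacks permission {needed}")
    # Business-rule validation happens here, independent of the model's output.
    if call.name == "refund_order" and call.args.get("amount_cents", 0) > 10_000:
        raise ValueError("refund exceeds per-call limit")
    return fn(**call.args)

# The model proposes; the validation layer disposes.
print(execute(ToolCall("refund_order",
                       {"order_id": "A-100", "amount_cents": 2500},
                       user_id="alice")))
```

The design choice worth noting is that none of these checks trusts the model's output: the authorization boundary lives entirely in ordinary application code, outside anything an attacker can reach through the prompt.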
By the end of this talk, you'll understand the unique security challenges of AI-powered applications and walk away with a practical framework for threat modeling and securing your LLM integrations. Whether you're building your first AI agent or already running LLMs in production, you'll gain the knowledge to keep your AI-enabled systems secure without sacrificing their power.

Spencer Schneidenbach
Consultant, Microsoft MVP
St. Louis, Missouri, United States