Agentic Threat Modeling and Mitigation Strategies
This session focuses on securing AI systems that can reason, plan, and take actions using tools. It treats the reasoning process itself as an attack surface and designs controls so that hallucinations, poisoned context, or incorrect plans cannot translate into real‑world damage. The goal is a clear separation of powers: agents may propose actions intelligently, but systems execute them safely, enforced through strict boundaries, permissions, validation, and observability.
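The "agents propose, systems execute" split described above can be sketched in a few lines. This is an illustrative example only, not material from the session: every tool name, function, and schema here is hypothetical, standing in for whatever policy layer a real system would use. The key idea is that the model's proposed tool call is validated against an explicit allowlist before anything runs, and denials are returned as observable results rather than silent failures.

```python
# Hypothetical sketch of the propose/validate/execute boundary.
# All tool names and helpers below are illustrative, not a real API.

ALLOWED_TOOLS = {
    "read_file": {"path"},           # tool name -> permitted argument names
    "send_email": {"to", "body"},
}

def validate_proposal(proposal: dict) -> tuple[bool, str]:
    """Check a proposed action against the tool allowlist and its schema."""
    tool = proposal.get("tool")
    if tool not in ALLOWED_TOOLS:
        return False, f"tool '{tool}' is not permitted"
    extra = set(proposal.get("args", {})) - ALLOWED_TOOLS[tool]
    if extra:
        return False, f"unexpected arguments: {sorted(extra)}"
    return True, "ok"

def execute(proposal: dict, handlers: dict) -> str:
    """Run the proposal only if validation passes; otherwise refuse loudly."""
    ok, reason = validate_proposal(proposal)
    if not ok:
        return f"REJECTED: {reason}"  # auditable denial, nothing executed
    return handlers[proposal["tool"]](**proposal.get("args", {}))

handlers = {
    "read_file": lambda path: f"contents of {path}",
    "send_email": lambda to, body: f"sent to {to}",
}

print(execute({"tool": "read_file", "args": {"path": "/tmp/a"}}, handlers))
print(execute({"tool": "delete_db", "args": {}}, handlers))
```

Note the design choice: even a perfectly fluent but poisoned plan (here, a `delete_db` call) is stopped at the execution boundary, because the gate trusts the policy, not the model's reasoning.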
This session includes a technical deep dive and assumes basic familiarity with GenAI, LLMs, and agentic systems. Attendees should have an architectural or engineering background, ideally working with enterprise or production AI systems.
Target audience includes solution architects, platform engineers, security architects, senior developers, technical leaders, and anyone responsible for designing or governing agentic AI systems in enterprise environments.
The session is suitable for conferences, internal architecture forums, security briefings, and technical leadership summits. It works equally well as a first public delivery or as part of an executive or engineering roadmap.
Vishal Chaudhari
Mastercard, Principal Software Engineer
Pune, India