AI Security Engineer Foundations + Certificate
This 4-hour workshop covers six modules, each about 30 minutes long, with two 15-minute breaks.
The modules are:
1. Securing Vibe Coding
2. AI Threat Modeling
3. AI Red Teaming
3a. Hack a chatbot
3b. Securing the Agent Skills Ecosystem
4. AI Governance
4a. Addressing Shadow AI
4b. OWASP Top 10 for LLM
1. Securing Vibe Coding
As AI coding tools become embedded in daily development, they bring a new wave of productivity along with a new set of security risks. In this session, we break down the security implications of Vibe Coding and share actionable strategies for securing AI-generated code at scale.
2. AI Threat Modeling
We are witnessing a fundamental shift in application security. Traditional tooling is built to find bugs in deterministic code. But generative AI systems do not fail at the level of code; they fail at the level of behavior. In this interactive session, we travel from traditional threat modeling activities to the new level of thinking required for threat modeling in the AI age. We take a real-world chatbot and build and test a threat model together.
3a. AI Red Teaming: Hack a chatbot
In the AI age, traditional red teaming activities have expanded beyond deterministic systems like infrastructure, code, and auth flows. You're now also emulating adversaries against probabilistic systems whose behavior can be manipulated through language, context, and intent, and where the attack surface shifts every time the model, the prompt, or the data changes. In this interactive session, we introduce red teaming in the context of AI models, hack a chatbot together CTF-style, and examine how Snyk's red team tool can automate these activities.
3b. AI Red Teaming: Securing the Agent Skills & MCP Ecosystem
This session explores the emerging security threats targeting AI agent ecosystems, with a focus on two interconnected attack surfaces: malicious Skills in agent registries and vulnerabilities in MCP (Model Context Protocol) servers. The first half dives into "ToxicSkills" research, demonstrating how weaponized SKILL.md files can enable supply-chain attacks that exfiltrate credentials or grant shell access through seemingly innocent Markdown instructions. The second half shifts to the broader MCP threat landscape, covering everything from insecure server code and tool poisoning attacks to indirect prompt injection targeting IDEs like Cursor and apps like Claude Desktop — with live exploitation demos throughout. Attendees will leave with a comprehensive understanding of how these attack vectors work in practice, along with actionable best practices for securing their own MCP deployments.
4a. AI Governance: Addressing Shadow AI
For years, security teams have been addressing Shadow IT: bits and pieces of tech throughout the software development lifecycle that aren't well understood and, in some cases, are explicitly not allowed in an organization, but are still being used on development teams. In the AI age, it's even harder to understand the full breadth of AI component usage in an application. There can be a variety of models, data sets, MCP servers, agents, and more, all within a single application.
4b. AI Governance: OWASP Top 10 for LLM
This session brings the OWASP Top 10 for LLMs to life through real-world horror stories — cautionary tales of what happens when these vulnerabilities are exploited in the wild, drawn from some of the most notorious AI security incidents to date.
At the end of the session, you are issued an official Snyk Certificate of Completion for the Foundations workshop.
Note that while some Snyk tools are demoed, this is not a vendor-specific workshop. A vendor-specific (Snyk) version can be given as part of a sponsor package that includes a vendor workshop.
Micah Silverman
Director, AI Acceleration @ Snyk
Virginia Beach, Virginia, United States