Shipping AI Agents Safely

AI agents are delivering real value in production. They take a goal, plan steps, call tools, and adapt as results come back: retry, branch, or pick a different tool. That small degree of autonomy is a force multiplier: great when the next step is right, and costly when it isn't, with failure modes ranging from security flaws to data exposure.

This talk is about securing the entire AI agent, including Model Context Protocol (MCP) tools and servers. We treat the agent as a system: planner, tools, memory, retrieval, and orchestration. We focus on controls that hold up under change: least-privilege action scopes, human approvals for risky steps, simple information-flow control (IFC) labels for sensitive data, retention and redaction by design, and operational guardrails with audits and kill switches. MCP is one solid way to express and enforce these boundaries, not the only one. The emphasis is on what you can do next week: tighten action scopes, label sensitive data, add approval paths for risky steps, and log the decisions the agent makes, so that mistakes are contained and diagnosable.
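To make the first two controls concrete, here is a minimal sketch of least-privilege action scopes with a human-approval gate for risky steps. All names here (`ToolCall`, `RISKY_SCOPES`, the scope strings) are hypothetical illustrations, not part of MCP or any specific framework:

```python
from dataclasses import dataclass, field

# Hypothetical scope names; risky scopes require human approval before execution.
RISKY_SCOPES = {"fs:write", "net:external", "email:send"}

@dataclass
class ToolCall:
    name: str
    scope: str
    args: dict = field(default_factory=dict)

def execute(call: ToolCall, granted_scopes: set[str], approve) -> str:
    # Least privilege: the call's scope must be explicitly granted.
    if call.scope not in granted_scopes:
        raise PermissionError(f"scope {call.scope!r} not granted for {call.name}")
    # Human-in-the-loop gate for risky actions.
    if call.scope in RISKY_SCOPES and not approve(call):
        return "denied: approval refused"
    return f"ran {call.name}"

# Usage: reads run freely; sends are gated even when the scope is granted.
granted = {"fs:read", "email:send"}
print(execute(ToolCall("read_file", "fs:read"), granted, approve=lambda c: False))   # ran read_file
print(execute(ToolCall("send_email", "email:send"), granted, approve=lambda c: False))  # denied: approval refused
```

The design point is that denial is the default: a scope missing from the grant set fails loudly rather than silently widening the agent's reach.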

Learning outcomes
* Design least-privilege action scopes and approval points
* Map and control sensitive data flows across tools and memory (IFC)
* Threat-model prompts, retrieval, and tool chains with countermeasures
* Instrument audits, privacy logs, and kill switches that produce evidence
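The second outcome above, controlling sensitive data flows with IFC labels, can be sketched as a simple lattice check. The levels and the `may_flow` helper are illustrative assumptions, not a prescribed API:

```python
# Hypothetical sensitivity lattice: data may flow only to sinks at the
# same or a higher sensitivity level, so secrets never reach public sinks.
LEVELS = {"public": 0, "internal": 1, "secret": 2}

def may_flow(source_label: str, sink_label: str) -> bool:
    # Up-labeling is allowed; down-labeling (a potential leak) is blocked.
    return LEVELS[source_label] <= LEVELS[sink_label]

assert may_flow("internal", "secret")    # ok: flowing to a more protected sink
assert not may_flow("secret", "public")  # blocked: would leak secret data
```

In a real agent, each tool input and output would carry such a label, and the check would run before every tool call that crosses a trust boundary.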

Robert Herbig

AI Practice Lead at SEP

Indianapolis, Indiana, United States
