"Please Don't Delete My Repo": Why You Can't Trust Your AI Coding Agent (And What to Do About It)

Telling your AI coding agent "please don't commit directly to main" is about as effective as asking your cat not to knock things off the table. LLMs are non-deterministic by nature: a polite prompt instruction is not a security boundary.
In this session, you'll learn how to enforce what you can't simply ask for. We'll dive into lifecycle hooks that intercept tool calls before they execute, blocking unauthorized Git commits, file deletions, or rogue shell commands at the architectural level. We'll also explore how to run coding agents autonomously and safely inside sandboxes, so your agent can do real work without being a liability.
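
To make the hook idea concrete, here is a minimal sketch of a guard script wired up as a Claude Code PreToolUse hook (registered under "PreToolUse" in .claude/settings.json). It relies on the documented hook contract (the pending tool call arrives as JSON on stdin; exiting with code 2 vetoes the call and returns stderr to the model as feedback); the file path, regex patterns, and policy are illustrative assumptions, not a complete rule set:

#!/usr/bin/env python3
"""Illustrative Claude Code PreToolUse hook: veto risky git/shell commands."""
import json
import re
import subprocess
import sys

event = json.load(sys.stdin)  # Claude Code sends the pending tool call as JSON on stdin
if event.get("tool_name") != "Bash":
    sys.exit(0)  # only shell commands are inspected here; everything else passes

command = event.get("tool_input", {}).get("command", "")

def on_protected_branch() -> bool:
    result = subprocess.run(["git", "rev-parse", "--abbrev-ref", "HEAD"],
                            capture_output=True, text=True)
    return result.stdout.strip() in ("main", "master")

blocked = (
    re.search(r"\bgit\s+push\b.*\b(main|master)\b", command)
    or (re.search(r"\bgit\s+commit\b", command) and on_protected_branch())
    or re.search(r"\brm\s+-rf\b", command)
)

if blocked:
    # Exit code 2 blocks the tool call; stderr is shown to the model as feedback.
    print(f"Blocked by policy: {command}", file=sys.stderr)
    sys.exit(2)

sys.exit(0)

Because the check runs outside the model, no amount of clever prompting or injection can talk the agent past it.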

Whether you're building agentic workflows or just trying to stop your AI colleague from pushing directly to production, this talk gives you the controls that prompts never will.
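
The sandboxing half of the session builds on the same principle: constrain the environment, not the model. As a flavor of what that looks like, here is a minimal launcher sketch assuming Docker as the sandbox; the image name and agent CLI are placeholders, not a real product:

"""Illustrative launcher: run a coding agent inside a locked-down container."""
import os
import subprocess

workspace = os.path.abspath("workspace")  # only this directory stays writable

subprocess.run([
    "docker", "run", "--rm",
    "--network", "none",        # no outbound network: nothing gets pushed or exfiltrated
    "--read-only",              # immutable base filesystem
    "--tmpfs", "/tmp",          # scratch space lives in memory only
    "--mount", f"type=bind,src={workspace},dst=/workspace",
    "--workdir", "/workspace",
    "--cap-drop", "ALL",        # drop all Linux capabilities
    "--pids-limit", "256",      # bound the number of processes
    "my-agent-image",           # placeholder image with the agent preinstalled
    "run-agent", "--task", "fix the failing tests",  # placeholder agent command
], check=True)

Even if the agent goes rogue inside the container, the blast radius is one bind-mounted directory.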

Target audience: Developers, software architects, and technical leads working with or evaluating AI coding agents — no prior security background required.
Preferred session duration: 45–60 minutes
Technical requirements: Projector/screen and internet
Content: The session includes live demonstrations of hook-based tool interception (Claude Code lifecycle hooks) and sandboxed agent execution. Code examples are framework-agnostic but reference Claude Code, GitHub Copilot, and common MCP patterns.

Daniel Sogl

Microsoft MVP & Software Architect | Empowering developers with AI

Karlsruhe, Germany
