Agents With Seatbelts: Practical Ways to Keep AI Code Gen Under Control

In this session I will share **best practices** for using MCP (Model Context Protocol) setups and AI **Agents** in real development workflows 🤖. The goal is to improve both **quality** and **security**: even with all the excitement around these tools, it's still very easy to end up with confusing interactions, accidental data exposure, or Agents that trigger actions we never intended.

I will also present **reports comparing how different LLMs generate code**, showing clear differences in stability, output consistency, and the amount of guidance each model needs to avoid mistakes. Along with that, I'll bring recent **figures showing the surprisingly low productivity gains** many teams report when adopting AI tools: often small improvements instead of the big jumps everyone expects. We will look at why this happens: unclear prompts, weak tool design, loose permissions, and wrong assumptions about what the models can really handle.

The talk focuses on practical habits that developers can apply right away: tightening tool access, simplifying and focusing context, adding lightweight validation steps, and using safety patterns that prevent the most common failures. My goal is to help teams build **more reliable, productive, and secure AI workflows**, making AI code generation a real benefit rather than a risky experiment 😅.
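To make the first two habits concrete, here is a minimal sketch of an allowlist-plus-validation gate placed in front of agent tool calls. Everything in it (the `ToolCall` and `ToolPolicy` types, the `guardToolCall` function) is hypothetical illustration, not the API of any specific MCP SDK; the point is simply that every agent-initiated call is checked against an explicit allowlist and a per-tool argument validator before it runs.

```typescript
// Hypothetical types for illustration only -- not from any MCP SDK.
type ToolCall = {
  name: string;
  args: Record<string, unknown>;
};

type ToolPolicy = {
  // Default-deny: only tools on this list may run at all.
  allowedTools: Set<string>;
  // Per-tool argument validators; return an error message, or null to pass.
  validators: Map<string, (args: Record<string, unknown>) => string | null>;
};

// Reject any call that is not allowlisted or fails its validator.
function guardToolCall(call: ToolCall, policy: ToolPolicy): void {
  if (!policy.allowedTools.has(call.name)) {
    throw new Error(`Tool "${call.name}" is not on the allowlist`);
  }
  const validate = policy.validators.get(call.name);
  if (validate) {
    const problem = validate(call.args);
    if (problem !== null) {
      throw new Error(`Rejected call to "${call.name}": ${problem}`);
    }
  }
}

// Example policy: read-only file access, confined to the workspace.
const policy: ToolPolicy = {
  allowedTools: new Set(["read_file"]),
  validators: new Map([
    ["read_file", (args: Record<string, unknown>): string | null => {
      const path = args["path"];
      if (typeof path !== "string") return "path must be a string";
      if (path.includes("..")) return "path traversal is not allowed";
      return null;
    }],
  ]),
};

// Demo: one call passes, two are blocked before reaching any tool.
const calls: ToolCall[] = [
  { name: "read_file", args: { path: "src/index.ts" } },   // allowed
  { name: "delete_file", args: { path: "src" } },          // not allowlisted
  { name: "read_file", args: { path: "../secrets.env" } }, // fails validation
];

for (const call of calls) {
  try {
    guardToolCall(call, policy);
    console.log(`allowed: ${call.name}`);
  } catch (err) {
    console.log(`blocked: ${(err as Error).message}`);
  }
}
```

The design choice worth noting is default-deny: a tool an agent was never granted cannot be invoked at all, and even allowlisted tools get their arguments checked, so a confused or prompt-injected agent fails safely instead of acting on our behalf.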

A fast-paced look at how to boost **quality**, **security**, and real productivity when using AI Agents and MCP. I'll show where code-gen models fail, why productivity gains stay low, and the simple practices that keep AI powerful but under control 🤖.

Jonathan Vila

Staff Developer Advocate at Sonar

Barcelona, Spain
