Session
From Assistant to Actor: The New Security Risks of Coding Agents
Over the last year, AI coding assistants have evolved into agentic systems that plan, act, and make changes across the developer workflow. These agents do not just suggest code anymore. They run tools, modify configs, manage dependencies, open pull requests, and sometimes fix issues end to end with minimal human input. This shift dramatically expands the attack surface. Prompt injection now targets agents instead of chat boxes, poisoned repositories influence multi step decisions, and a single compromised instruction file can steer an agent into leaking secrets, weakening security controls, or introducing backdoors while appearing helpful and correct.
This session looks at what agentic AI means for developer security in practice. We will break down how autonomous and semi autonomous coding agents fail, where trust in automation goes too far, and why traditional secure coding guidance is no longer enough. The focus is on concrete scenarios teams are already facing and on pragmatic guardrails that keep agents useful without giving them unchecked power. The goal is to help security and engineering teams work with agentic AI in a way that scales productivity while keeping control, visibility, and accountability firmly in place.
- From autocomplete to agents: what actually changed
- How agentic workflows fail: prompt injection, poisoned context, over-permissioned tools
- Where security responsibility breaks down between humans and agents
- Realistic guardrails: permissions, boundaries, validation, and human checkpoints
- What secure AI-native development should look like going forward
Maxim Salnikov
AI Dev Tools & Platforms Solution Engineer at Microsoft, Tech Communities Lead, Keynote Speaker
Oslo, Norway
Links
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top