Suzanne Daniels
Chief Developer Advisor at Microsoft
Amsterdam, The Netherlands
I’m a Chief Developer Advisor at Microsoft, working with CxOs and engineering leaders in EMEA to shape strategy, modernize developer experience, and navigate the shift toward human–agent collaboration. My work sits at the intersection of engineering depth and executive clarity — helping organizations make confident decisions about platforms, AI adoption, and the future of how software gets built.
I’m an engineer first and always. Thirty years in the craft means I still think in systems, constraints, and trade‑offs, and that technical grounding is what makes me effective in the rooms I’m invited into. Whether I’m advising on architecture, developer experience, or organizational design, I bridge hands‑on engineering reality with strategic direction so teams can deliver better, faster, and with more joy.
Area of Expertise
Topics
Platform Engineering when software doesn't wait
Platform Engineering plays an important role as the foundation for AI transformation. Translating decisions and best practices into instructions for AI agents ensures the business and the dev teams can safely apply AI, and lets you agentify the platform. This changes the responsibility of the platform teams, the toolset, and how they operate.
If coding costs next to nothing, how do we keep software affordable, compliant, and governed? When engineering moves beyond Agile, Agentile ways of working might be the answer. The concept is familiar: people, process, technology - yet it's different. In this session we'll discuss all three and dive into agentic platform engineering concepts (yes, demos!).
9 Suspects, 0 Scripts: What a Murder Mystery Taught Me About Multi-Agent Orchestration
I built a murder mystery game where the suspects lie to you. Not scripted lies — real-time, motivated, adaptive deception from nine concurrent AI agents, each with their own secrets, alibis, and self-preservation instincts. Every playthrough is different because nobody is following a script. And that's exactly where everything interesting broke.
Detective Agentic Mysteries runs 9+ simultaneous GitHub Copilot SDK sessions: suspects, a forensics analyst, a criminal profiler, a narrator, and a director agent orchestrating the world. Each agent has its own tools, state, and objectives. Some agents cooperate. Some actively undermine each other. The cooperative multi-agent demos you've seen at conferences don't prepare you for what happens when Agent 4 contradicts Agent 7, both are partially right, and the player is watching.
I'll walk through the architecture, then focus on what broke: the suspect who invented evidence that didn't exist. The director agent that overrode a rogue character and collapsed narrative coherence. The moment I realised partial information sharing wasn't a feature — it was the entire design problem. Each failure taught me something that transfers directly to enterprise multi-agent systems: how to handle competing objectives, when an orchestrator should intervene versus observe, and how to maintain system coherence without scripting away the autonomy that makes agents useful.
The patterns I found — state isolation, bounded autonomy, orchestrator-as-referee, evidence-based convergence — aren't game patterns. They're system patterns. I know because I've now built two multi-agent systems on them. The detective game is adversarial: agents compete and deceive. Git-Ape, an open-source infrastructure deployment framework, is cooperative: agents collaborate through governance gates. Different domains, same structural problems. State management, agent coordination, knowing when to let an agent run and when to stop it. The patterns transferred directly.
You'll leave with design principles forged in the most adversarial multi-agent environment I could build — and a clear sense of which coordination problems your production system is about to hit.
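The coordination patterns named above (state isolation, bounded autonomy, orchestrator-as-referee, evidence-based convergence) could be sketched roughly like this. All class and variable names are illustrative, not the game's actual implementation, and the model call is replaced by a canned claim:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # State isolation: each agent's memory is private; other agents and
    # the orchestrator never read it directly, only the claims it emits.
    name: str
    memory: dict = field(default_factory=dict)

    def claim(self) -> str:
        # Stand-in for a model call; a real suspect agent would generate
        # a (possibly deceptive) statement here.
        return self.memory.get("alibi", "no comment")

class Referee:
    """Orchestrator-as-referee: it never scripts agent output. It only
    checks claims against shared, verifiable evidence and intervenes on
    contradiction (bounded autonomy)."""

    def __init__(self, evidence: set[str]):
        self.evidence = evidence  # the ground truth agents converge on

    def review(self, agent: Agent) -> str:
        statement = agent.claim()
        # Evidence-based convergence: a claim whose negation ("!fact") is
        # established evidence triggers intervention; anything else is
        # merely observed, preserving the agent's autonomy to lie.
        if f"!{statement}" in self.evidence:
            return f"intervene: {agent.name} contradicts evidence"
        return f"observe: {agent.name}"

referee = Referee(evidence={"!was_at_the_library"})
honest = Agent("butler", {"alibi": "was_in_the_kitchen"})
liar = Agent("gardener", {"alibi": "was_at_the_library"})
print(referee.review(honest))  # observe: butler
print(referee.review(liar))    # intervene: gardener contradicts evidence
```

The point of the sketch is the intervention condition: the referee acts only when a claim collides with shared evidence, not whenever an agent behaves unexpectedly.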
Key takeaways:
1. Adversarial multi-agent systems expose coordination failures that cooperative systems hide — building both gives you a deeper understanding of orchestration than building either alone.
2. State isolation and bounded autonomy are universal multi-agent patterns — they work whether your agents are lying about a murder or validating a security policy.
3. The orchestrator's job isn't to control agent behaviour — it's to create the structural conditions where independent agents produce coherent outcomes, whether that's a solvable mystery or a safe deployment.
Codifying Trade-offs: Security, Cost, and Compliance as Agent Guardrails
Every agent demo ends the same way: the agent does the thing, the audience claps, nobody asks what happened to the security review.
I didn't just theorize about codifying trade-offs. I built an open-source multi-agent system that does it. Git-Ape (github.com/Azure/git-ape) is a platform engineering framework where specialized agents plan, validate, and deploy Azure infrastructure — and where nothing reaches production without passing through explicit guardrails enforced by the system itself.
Here's how it actually works. A requirements gatherer agent interviews the human. A template generator produces infrastructure-as-code. Then, before anyone confirms anything, a security analyzer runs a blocking gate — deployment is structurally impossible until issues are resolved. A cost estimator prices the deployment so humans confirm with real numbers, not vibes. A Principal Architect agent runs a Well-Architected Framework review across all five pillars. Only after all of that does a human see the full picture and explicitly approve. After deployment, a drift detector closes the evidence loop: did what we deployed stay the way we deployed it?
The key insight isn't that we added checks. It's that we made trade-offs consumable by agents. Security policy isn't a PDF — it's policy-as-code that agents evaluate natively. Cost thresholds aren't guidelines — they're hard constraints. Compliance isn't an audit you do later — it's a gate you pass through now.
I'll walk through the architecture, the failures that shaped it, and the design principles that transfer to any multi-agent system where the stakes are real. If your agents can deploy but can't be told no, you don't have guardrails. You have a demo.
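The "structurally impossible" claim above can be illustrated with a minimal gate sketch. This is a hypothetical reduction, not the actual Git-Ape API; gate names, template keys, and thresholds are all invented for illustration:

```python
class GateFailure(Exception):
    """Raised by a blocking gate. Deployment is structurally impossible
    until the gate passes; it is not an advisory warning."""

def security_gate(template: dict) -> None:
    # Policy-as-code: the constraint is evaluated natively, not read
    # from a PDF and approximated.
    if template.get("public_network_access", False):
        raise GateFailure("security: public network access is not allowed")

def cost_gate(template: dict, monthly_budget: float) -> None:
    # Cost thresholds as hard constraints, not guidelines.
    if template.get("estimated_monthly_cost", 0.0) > monthly_budget:
        raise GateFailure("cost: estimate exceeds approved budget")

def deploy(template: dict, monthly_budget: float, human_approved: bool) -> str:
    security_gate(template)              # blocking: runs before any approval
    cost_gate(template, monthly_budget)  # humans confirm with real numbers
    if not human_approved:               # explicit human sign-off comes last
        raise GateFailure("approval: explicit human sign-off required")
    return "deployed"

# A compliant template with sign-off passes every gate:
print(deploy({"estimated_monthly_cost": 120.0},
             monthly_budget=500.0, human_approved=True))  # deployed
```

Because the gates sit inside `deploy` itself, an agent that can call `deploy` still cannot skip them; that is the difference between a guardrail and a warning.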
Key takeaways:
1. Trade-offs only function as guardrails when they're structural — blocking gates in the agent workflow, not advisory warnings that can be skipped.
2. Security, cost, and compliance need to be encoded as policy-as-code that agents consume natively, not as human-readable documents agents approximate.
3. The evidence loop matters as much as the execution gate — drift detection and post-deployment validation are what turn a one-time check into continuous governance.
The Decision-Speed Gap: Why Your Agent System Works But Your Organisation Doesn't
Code is merged and deployed by agents in the middle of the night. The decision about *what* to build still takes three meetings and a Slack thread.
This is the decision-speed gap, and it's where quality goes to die.
We've spent the last two years optimising the execution layer. Agents write code, run tests, open PRs, deploy infrastructure. The tooling is genuinely impressive. But the organisational layer above it hasn't changed at all. Approval chains designed for humans moving at human speed now sit on top of systems that move at machine speed. The result isn't faster delivery. It's faster delivery of the wrong thing, or fast systems waiting on slow decisions.
I'll introduce the ISEE framework (Intent, Structure, Execution, Evidence) as a way to think about where agents actually fit in your decision architecture. Intent is the hardest part: who decides what matters, and how fast can that decision propagate? Structure is how you encode constraints so agents don't need to ask. Execution is the part we've already solved. Evidence is what closes the loop, the part almost everyone skips.
This isn't theory. ISEE is built into working software — Git-Ape, an open-source multi-agent platform that plans, validates, and deploys infrastructure through exactly these four stages. The framework came from watching where that system needed human judgment and where it didn't.
I'll show where most organisations are jammed (it's not execution), what actually needs to change (it's not the agents), and why the teams that get this right will outperform by a structural margin — not because their agents are better, but because their decisions are faster.
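The decision-speed gap the four ISEE stages expose can be reduced to a tiny sketch: a decision either hits an encoded constraint at the Structure stage and flows straight through, or falls back to a human at the Intent stage. Stage names come from the abstract; everything else is illustrative:

```python
from enum import Enum

class Stage(Enum):
    INTENT = "who decides what matters, and how fast that propagates"
    STRUCTURE = "constraints encoded so agents don't need to ask"
    EXECUTION = "agents write, test, and deploy"
    EVIDENCE = "did the outcome match the intent?"

def decision_path(constraint_encoded: bool) -> list[Stage]:
    # If the constraint is already encoded at the Structure stage, the
    # agent proceeds at machine speed. If not, the decision escalates to
    # a human at the Intent stage: the decision-speed gap.
    if constraint_encoded:
        return [Stage.STRUCTURE, Stage.EXECUTION, Stage.EVIDENCE]
    return [Stage.INTENT, Stage.STRUCTURE, Stage.EXECUTION, Stage.EVIDENCE]

# An encoded constraint skips the human-speed Intent stage entirely:
print([s.name for s in decision_path(True)])
print([s.name for s in decision_path(False)])
```

Note that Evidence closes both paths; skipping it is how a one-time approval decays into an unverified assumption.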
Key takeaways:
1. The bottleneck in most agent-enabled organisations isn't execution speed — it's decision speed. Optimising the wrong layer makes the gap worse, not better.
2. The ISEE framework (Intent, Structure, Execution, Evidence) gives teams a practical model for identifying where human judgment is essential and where it's just friction.
3. Organisations that redesign their decision architecture around agent capabilities will outperform those that bolt agents onto existing approval chains, and the gap will compound.
The Org Chart Won't Save You: Operating Models for Teams Where AI Does Real Work
Agile assumed all work is done by humans. Sprints, standups, retros, code review — every ceremony was designed around the pace of people coordinating with other people. That assumption held for twenty years. It doesn't hold anymore.
AI maturity is forcing a structural rethink of engineering organisations. Not a tooling upgrade — a rethink of who does what, who decides what, and how fast those decisions can travel. I started writing about this when I noticed that the teams adopting AI agents most aggressively were also the ones where coordination was breaking down fastest. Ceremonies built for human-speed iteration couldn't absorb machine-speed execution. The org chart hadn't changed, but the work inside it had changed completely.
This talk is the operating model that came out of that observation. It covers how product management changes when agents surface architectural trade-offs before sprint planning even starts — the PM doesn't disappear, the PM moves upstream toward intent. How team topology shifts from pipeline stages to autonomous cells that carry full context. How you codify security, cost, and capacity constraints so agents can move fast without humans becoming a bottleneck — or a rubber stamp.
The model draws on what I call the ISEE principles — Intent, Structure, Execution, Evidence — born from 28 years of engineering work and three years of advising engineering leaders on AI-driven development, from coding assistants to agentic platform engineering. You'll see what broke when organisations got the human-AI boundaries wrong, what they changed, and the specific patterns that held up across teams of five and teams of fifty.
You'll leave with a concrete way to evaluate your own org structure against the reality that AI doesn't just write code — it reshapes who decides what, and when they decide it.
Takeaways:
1. A decision-rights map for human-AI teams — a repeatable method to identify which decisions belong to humans, which to agents, and which need both, so you can redesign roles and reporting lines with intent, not instinct.
2. How the PM, engineering, and platform roles actually shift — the PM doesn't disappear, the PM moves upstream. Engineers shift from execution to orchestration. Platform absorbs what three separate roles used to own. You'll see where these transitions worked and where they created new friction.
3. The ISEE principles as an org-design diagnostic — Intent, Structure, Execution, Evidence as four questions to assess whether your organisation is set up for AI-native delivery or just bolting agents onto an Agile-era org chart.
From Brainstorming to Release with M365, Azure and GitHub Copilot
Studies show that up to 60% of engineering time is spent on non-coding tasks: gathering context, navigating documentation, and syncing across tools. We rely on platforms like Microsoft 365 for collaboration and on GitHub Copilot to boost productivity and pair-program.
But what if these platforms worked together to streamline your entire software development lifecycle?
In this session, Suzanne Daniels will walk through an implementation starting from emails and project notes, ending in production code. You'll learn how to use out-of-the-box features in M365 and GitHub Copilot to keep valuable context intact across your SDLC and go from idea to implementation better, faster and with less context switching.
Engineering Organizations at the Speed of AI: Rethinking Teams, Roles, and Decision-Making
AI is fundamentally reshaping the fabric of engineering organizations. In this session, we’ll explore how agentic workflows, AI-assisted coding, and platform engineering principles are driving a seismic shift in how teams are structured, how product management evolves, and how architectural tradeoffs are surfaced and codified. Agentile Teams are built for this new reality – combining the best of lean, agile, and agentic principles to empower humans and AI to work side by side, faster and smarter than ever before.
Drawing on real-world experiments and industry trends, Suzanne will discuss the new operating models that enable rapid experimentation, early decision-making, and scalable knowledge sharing – without sacrificing security, capacity, or cost control. Leaders will leave with actionable insights on designing resilient, high-velocity organizations ready for the next era of software engineering.
Dear organizer - An introduction to the topic: https://www.linkedin.com/pulse/engineering-beyond-agile-ai-rise-agentile-teams-suzanne-daniels-zhofe
Agentile Teams: Where AI, Platform Engineering, and Human Creativity Redefine Software Delivery
What if your engineering team could move at the speed of AI – without sacrificing quality, security, or control? Enter the Agentile Team: a lean, AI-powered, and platform-enabled evolution of Agile, designed to amplify human creativity.
In this session, we’ll explore how Agentile teams codify architectural tradeoffs (security, capacity, cost) into specs, templates, and guardrails, enabling rapid experimentation and smarter decision-making from day one. Drawing on real-world experiments and platform engineering principles, Suzanne will share practical strategies for developers and platform engineers to thrive in this new era of agentic, high-velocity software delivery.
Dear organizer, an introduction to the topic: https://www.linkedin.com/pulse/engineering-beyond-agile-ai-rise-agentile-teams-suzanne-daniels-zhofe
Fast Flow Conf UK 2026 Sessionize Event Upcoming
DWX 26 Sessionize Event Upcoming
DevBcn 2026 Sessionize Event Upcoming
Experts Live UK 2026 Sessionize Event Upcoming
AI DevSummit + DeveloperWeek Management 2026 Sessionize Event Upcoming
AI Lowlands 2025 Sessionize Event
PlatformCon 2025 Sessionize Event
AgentCon Utrecht 2025 Sessionize Event
Experts Live Denmark 2025 Sessionize Event
Update Conference Prague 2024 Sessionize Event
The DEVOPS Conference Copenhagen & Stockholm 2024 Sessionize Event
Swetugg Göteborg 2024 (Sweden) Sessionize Event
Future Tech 2024 Sessionize Event
DevOpsDays Zurich 2024 Sessionize Event
JCON WORLD 2023 Sessionize Event
DeveloperWeek Enterprise 2023 Sessionize Event
Build Stuff 2023 Lithuania Sessionize Event
Infobip Shift 2023 Sessionize Event
NDC London 2023 Sessionize Event
DevOpsDays Tel Aviv 2022 Sessionize Event
Oπen Conf - 2022 Sessionize Event
DeveloperWeek Enterprise 2022 Sessionize Event
Innersource Commons
TechRadarCon22
Building for Developer Happiness (keynote)
Update Conference Prague 2022 Sessionize Event
VMware explore
BackstageCon 2022
MC and co-chair for BackstageCon conference 2022
Devopsdays Eindhoven 2022 Sessionize Event
Little Rock Tech Fest 2022 Sessionize Event
Techorama Netherlands 2022 Sessionize Event
Experts Live Netherlands 2022 Sessionize Event
WeAreDevelopers Live 2022 Sessionize Event
code.talks 2022
Talk: Improving Developer Experience with Backstage
Open Source Summit
Session: Improving Developer Experience with Backstage
Future Tech 2022 Sessionize Event
Umbraco Codegarden 2022 Sessionize Event
Techorama 2022 BE Sessionize Event
TEQnation 2022 Sessionize Event