Most Active Speaker

Roelant Dieben

Cloud architect @ Sopra Steria

Lopik, The Netherlands

With over 20 years of experience developing software on the Microsoft stack, Roelant Dieben has a lot to share about stuff that has been obsolete for years. At Sopra Steria he helps companies with their Azure cloud and AI challenges. He is a Microsoft Azure & AI MVP with a passion for machine learning & AI and application lifecycle management.

Area of Expertise

  • Information & Communications Technology

Topics

  • GenAI
  • Artificial Intelligence
  • Azure
  • Microsoft Technologies
  • Microsoft MVP
  • Microsoft Azure
  • Agentic AI
  • Agents
  • AI Agents & Multi-Agent Systems
  • Responsible AI
  • C#
  • dotNet
  • MCP
  • Model Context Protocol

Spec-driven development using Spec Kit

Learn how to build real, production-ready software by turning clear intent into executable code using GitHub’s open-source Spec Kit toolkit.
In this hands-on workshop you’ll be guided through the four core phases of Spec-Driven Development: Specify, Plan, Tasks, and Implement.
Participants will define what to build and why, translate that into a concrete technical plan, break the work into reviewable tasks, and collaborate with an AI coding agent of their choice (both GitHub Copilot and Claude Code are supported) to implement features while maintaining quality through validation.
By the end of the session you’ll understand how structured specification workflows reduce ambiguity, improve predictability, and lead to better code with less rework. Whether you choose a weather dashboard, a habit tracker, or your own project idea, this workshop gives you practical experience applying spec-driven practices that transform how modern teams build software with AI.

Demystifying AI image generation

Ever wondered how AI turns noise into art? In this demo-heavy session, we’ll take a peek under the hood of AI image generation and actually reconstruct its core ideas step by step.
We’ll start with pure noise and gradually reveal how denoising, autoencoders, and CLIP text embeddings work together to generate coherent, styled images.
Expect not only slides, but also code, visuals, and eureka moments. Whether you’re an ML enthusiast or just curious how AI translates imagination into pixels, you’ll leave with a clear mental model of how image diffusion models "think" and create.
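
The forward half of that story can already be sketched in a few lines. The snippet below is a toy illustration of the noising process diffusion models are trained to reverse (one scalar "pixel" standing in for an image tensor), not a real model:

```python
import math
import random

def noise_step(x0, alpha_bar, rng):
    """Forward diffusion: blend a clean value with Gaussian noise.

    x_t = sqrt(alpha_bar) * x_0 + sqrt(1 - alpha_bar) * eps
    As alpha_bar goes from 1 to 0, the signal dissolves into pure noise;
    the trained denoiser learns to walk this path backwards.
    """
    eps = rng.gauss(0.0, 1.0)
    return math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps

rng = random.Random(42)
pixel = 0.8  # one "pixel" of a clean image, scaled to [-1, 1]
for alpha_bar in (1.0, 0.9, 0.5, 0.1, 0.0):
    print(f"alpha_bar={alpha_bar:.1f} -> x_t={noise_step(pixel, alpha_bar, rng):+.3f}")
```

Running it shows the pixel value drifting from its clean value toward pure noise, which is exactly the trajectory the session reconstructs in reverse.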

Building Intelligent Search with Azure AI Search

In this developer-focused, hands-on workshop, you’ll build an intelligent search solution using Azure AI Search, powered by a dataset from The Office (US).
We’ll move from raw data to AI-enriched discovery in a practical, end-to-end scenario. Starting with data exploration, you’ll design and implement a robust ingestion pipeline, define and optimize search indexes, and understand how schema decisions affect relevance and ranking.
From there, you’ll enhance your solution with cognitive skills, extracting entities, key phrases, and insights to transform unstructured data into searchable gold.
Finally, we’ll extend the solution using Foundry IQ to add contextual intelligence and deeper reasoning on top of traditional search.
By the end of the session, you’ll understand how to architect, build, and evolve an AI-powered search experience using Azure AI Search.
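
To make the schema-decisions point concrete, here is a minimal sketch of an index definition, expressed as the JSON body you would send to the Azure AI Search indexes REST API. The index and field names ("office-quotes", "quote", "speaker", "season") are hypothetical choices for an Office dataset:

```python
# A sketch of an Azure AI Search index definition as a plain dict. Which
# fields are searchable, filterable, facetable, or sortable decides which
# queries (full-text search, filters, facets, sorts) the index can serve.
office_quotes_index = {
    "name": "office-quotes",
    "fields": [
        # The key field uniquely identifies each document.
        {"name": "id", "type": "Edm.String", "key": True},
        # Full-text searchable dialogue; the analyzer drives tokenization.
        {"name": "quote", "type": "Edm.String", "searchable": True,
         "analyzer": "en.microsoft"},
        # Filterable/facetable metadata for faceted navigation.
        {"name": "speaker", "type": "Edm.String",
         "filterable": True, "facetable": True},
        {"name": "season", "type": "Edm.Int32",
         "filterable": True, "sortable": True},
    ],
}

key_fields = [f["name"] for f in office_quotes_index["fields"] if f.get("key")]
print("key field:", key_fields[0])
```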

Understanding Model Context Protocol (MCP)

For the past few years, we have been finding our way with large language models: integrating them into our solutions, giving them access to external data sources, and infusing them with tools.

This has led to many challenges along the way, which can now be addressed by using the Model Context Protocol (MCP) to wire up data and tools.

During this session, we’ll demystify MCP by exploring its purpose, architecture, and practical application. You'll learn how MCP enables precise model behavior tuning, contextual reuse across sessions and personas, and safe delegation of capabilities across teams.

After this session, you will be able to assess whether your LLM projects need the Model Context Protocol.
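
As a taste of what "wiring up" looks like: MCP speaks JSON-RPC 2.0 under the hood. A client first lists a server's tools, then calls one by name. The messages below are simplified, and the tool name and arguments are invented for illustration:

```python
import json

# A client discovers what a server offers...
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# ...then invokes a tool by name with structured arguments.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_weather",            # hypothetical tool exposed by a server
        "arguments": {"city": "Lopik"},   # argument schema comes from tools/list
    },
}

print(json.dumps(call_tool, indent=2))
```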

Stop Vibe Coding. Start Spec-Driven Development with GitHub Spec Kit

You prompt a coding agent. It generates something plausible, compiles cleanly, and still misses the point. The issue isn’t the model. It’s how we’re using it.

GitHub Spec Kit puts a structured specification at the center of your workflow. Before any code is written, you define what you’re building, why it matters, and how it should behave. From there, the agent turns that spec into a plan, a task list, and working code. You guide the intent. It handles the execution.

In this session, you’ll see Spec Kit applied to a real project. We’ll walk through the /specify, /plan, and /tasks workflow with GitHub Copilot, explore where this approach produces more reliable results, and discuss how spec-driven development scales from individual developers to teams.

You’ll leave with a clear understanding of what spec-driven development is, how to use GitHub Spec Kit with Copilot in VS Code, and when this approach is the right fit.

Your Backlog, on Autopilot: GitHub’s Agentic Workflows in Practice

What if you could assign an issue before lunch and come back to a review-ready pull request?

GitHub’s agentic workflows make that possible. The coding agent can spin up an environment, implement a feature, run checks, review its own work, and open a PR. But this is not autonomy. It is structured delegation. You define the boundaries, standards, and expectations, and the agent executes within them.

In this session, we’ll take a real issue from backlog to pull request and examine what actually works today. You’ll see how to configure the Copilot coding agent, define custom agents in your repository, integrate MCP servers, and use Agentic Workflows to express automation in plain Markdown.

You’ll leave knowing how to delegate work effectively, how to define agents that reflect your team’s practices, and where agentic workflows deliver real value along with the guardrails you still need.

Trust, But Verify: Responsible AI and Evaluations in Microsoft Foundry

Shipping an AI feature is easy. Proving that it works, behaves safely, and stays that way over time is much harder.

Most teams treat evaluation as a one-time step before release. In practice, production AI systems require continuous measurement and enforcement. Microsoft Foundry provides the tools to do this properly, including safety evaluators, quality metrics such as groundedness and coherence, agent-level evaluation, and automated red teaming.

In this session, we’ll build a practical evaluation loop using the Azure AI Evaluation SDK and integrate it into a CI/CD pipeline. You’ll see how to detect regressions, measure real-world behavior, and treat responsible AI as an ongoing engineering discipline rather than a compliance exercise.

You’ll leave with a clear approach to implementing quality and safety evaluations, understanding the difference between pre-deployment and continuous evaluation, and connecting these practices into your delivery pipeline.
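
The shape of such a gate can be sketched in a few lines of plain Python. The scores would come from evaluators (groundedness, coherence, and so on) run against a test dataset; here they are hard-coded stand-ins, and the thresholds are illustrative, not recommendations:

```python
# A minimal evaluation gate for a CI/CD pipeline: fail the build when any
# tracked metric regresses below its floor (scores on a 1-5 scale).
THRESHOLDS = {"groundedness": 4.0, "coherence": 3.5}

def gate(scores: dict[str, float]) -> list[str]:
    """Return the metrics that fall below their threshold."""
    return [m for m, floor in THRESHOLDS.items() if scores.get(m, 0.0) < floor]

run_scores = {"groundedness": 4.3, "coherence": 3.2}  # stand-in evaluator output
failures = gate(run_scores)
if failures:
    print(f"FAIL: regression on {', '.join(failures)}")  # non-zero exit in CI
else:
    print("PASS: all metrics above threshold")
```

The same gate runs pre-deployment and on a schedule against production traffic samples, which is the difference between a one-time check and continuous evaluation.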

Agents All the Way Down: Building Multi-Agent Systems on Microsoft Foundry

Single-agent systems are a starting point. Real-world scenarios require coordination: agents that plan, delegate, call tools, invoke other agents, and recover when things go wrong.

Microsoft Foundry provides the building blocks for this, including Agent-to-Agent communication, MCP integration, persistent memory, and control-plane observability. The challenge is not wiring these pieces together. It is making them reliable.

In this session, we’ll design and walk through a multi-agent architecture that coordinates tasks across tools and agents. We’ll examine real failure modes such as coordination drift, tool misuse, and cascading errors, and discuss practical patterns for recovery, state management, and control. We will also look at when multi-agent systems are worth the added complexity and when they are not.

You’ll leave with a practical understanding of how to compose multi-agent workflows, how to observe and govern them, and how to design for reliability in production.
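
The coordination pattern at the heart of the session can be reduced to a toy sketch: a coordinator delegates subtasks to specialist agents and applies a bounded retry before escalating. The agent names and "work" below are invented; a real system on Microsoft Foundry would add persistent memory, agent-to-agent messaging, and observability:

```python
# Toy specialist "agents" standing in for LLM-backed workers.
def research_agent(task: str) -> str:
    return f"notes on {task}"

def writer_agent(task: str) -> str:
    return f"draft covering {task}"

AGENTS = {"research": research_agent, "write": writer_agent}

def coordinate(plan: list[tuple[str, str]]) -> list[str]:
    """Run each (agent, task) pair, retrying once, then escalating.

    Bounding retries and surfacing failures explicitly is what keeps a
    single bad step from turning into a cascading error.
    """
    results = []
    for agent_name, task in plan:
        for attempt in (1, 2):
            try:
                results.append(AGENTS[agent_name](task))
                break
            except Exception:
                if attempt == 2:
                    results.append(f"ESCALATE: {agent_name} failed on {task!r}")
    return results

print(coordinate([("research", "vector search"), ("write", "vector search")]))
```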

The AI-Native SDLC: From Spec to Production with GitHub and Microsoft Foundry

What does a software development lifecycle look like when AI is a first-class participant?

Not as a chatbot you occasionally consult, but as an integrated part of how you define, build, test, deploy, and operate software.

In this session, we’ll walk through an end-to-end AI-native SDLC. Starting with a structured specification using GitHub Spec Kit, moving through implementation with GitHub Copilot’s coding agent, and into deployment, evaluation, and governance with Microsoft Foundry. We’ll follow a single feature from idea to production to show how these pieces connect into a coherent workflow.

The result is a development model that is faster, more consistent, and easier to reason about, with clear points of control and accountability.

You’ll leave with a concrete mental model of an AI-native SDLC, an understanding of how the tooling fits together, and a way to explain its value to both technical teams and leadership.

Demystifying neural networks

Neural networks are used all around us, and they can feel a bit overwhelming when you first look into them as a software developer.

During this session, I will demystify neural networks and give you hands-on tips on how to leverage them in your own projects. Afterwards, you will be able to identify the moving parts, understand how these networks help us solve complex problems, and see how they are trained.
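
To show how small the individual moving parts are, here is a single artificial neuron in plain Python: a weighted sum of inputs plus a bias, squashed through an activation function. The weights below are picked by hand for illustration; real networks learn them via backpropagation:

```python
import math

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """One neuron: weighted sum + bias, passed through a sigmoid."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# z = 1.0*2.0 + 0.0*(-1.0) - 1.0 = 1.0, so this prints sigmoid(1) ~= 0.731
print(neuron([1.0, 0.0], [2.0, -1.0], -1.0))
```

Stacking layers of these units, and adjusting the weights and biases during training, is all a neural network is.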

The Dark side of AI

Artificial Intelligence is all around us. It is embedded in so many aspects of our day-to-day lives. Our phones, laptops, doorbells, and even lawn mowers are enriched with technology to make our lives easier.

These beautiful developments also have a dark side. During this session, we take you on a journey to this dark side and beyond, touching upon the important subjects of diversity and inclusion and how they fit into the use of artificial intelligence.

RAG and Vector Databases Explained: How to hand an LLM your Data

Adding your own data to large language models really makes them shine.

Join us for this demo-heavy session where we will explore several ways of handing these models your data, why vector databases are the new cool thing, and how services in Azure, like Azure AI Search, help make these models look even smarter.

This session will immediately get you up to speed with (the power of) RAG and will jump-start your LLM development journey.
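
The core retrieval step of RAG fits in a few lines: embed the question, find the stored chunks whose embedding vectors point in the most similar direction, and hand those chunks to the LLM as context. The 3-dimensional vectors below are hand-made stand-ins for real embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.dist(a, [0] * len(a)) * math.dist(b, [0] * len(b)))

# A tiny "vector database": text chunks mapped to pretend embeddings.
chunks = {
    "Invoices are due within 30 days.": [0.9, 0.1, 0.0],
    "Our office cat is named Bob.":     [0.0, 0.2, 0.9],
}
question_vec = [0.8, 0.2, 0.1]  # pretend embedding of "When are invoices due?"

best = max(chunks, key=lambda text: cosine(chunks[text], question_vec))
print("context for the LLM:", best)
```

A real system swaps the dict for a vector store such as Azure AI Search and the hand-made vectors for embeddings from a model, but the nearest-neighbour idea is the same.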

Tokens 101: Making sense of tokenization in AI

Tokenization is a fundamental concept for large language models, but what exactly is a token, how do tokenizers work, and why would we want to use tokens in the first place?

Join this session and we will unravel the mechanisms behind transforming textual data into machine-understandable formats together.

Through real-world examples and demos, you will grasp the essence of tokenization, the pitfalls, the relevance to prompt engineering, and why it is important to have some understanding of these fundamental building blocks of large language models.
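
One of those pitfalls is that tokens rarely line up with whole words. The toy greedy tokenizer below, over a tiny made-up vocabulary, shows the matching idea: repeatedly take the longest vocabulary entry found at the start of the remaining text. Real tokenizers (BPE and friends) learn their vocabulary from data, but the splitting behaves similarly:

```python
# A hand-picked toy vocabulary; real vocabularies hold tens of thousands
# of learned subword entries.
VOCAB = {"token", "iza", "tion", "t", "i", "o", "n", "a", "z"}

def tokenize(text: str) -> list[str]:
    tokens, i = [], 0
    while i < len(text):
        # Longest vocabulary entry matching at position i; fall back to the
        # single character when nothing in the vocabulary matches.
        match = max((v for v in VOCAB if text.startswith(v, i)),
                    key=len, default=text[i])
        tokens.append(match)
        i += len(match)
    return tokens

print(tokenize("tokenization"))  # a 12-character word becomes just 3 tokens
```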

Build your first agent with Azure AI Foundry Agent Service

The Azure AI Foundry Agent Service blends powerful cloud services with a flexible SDK, making it easy to build robust, intelligent agents. In this session, you'll learn how to create your own code-first AI agent.

We'll explore core agent concepts by starting with a simple agent and progressively adding capabilities like reasoning, data analysis, and integration with external data sources.

Whether you're new to agents or looking to go deeper, you'll leave ready to start building your own AI-powered solutions.
