BEGIN:VCALENDAR
NAME:AI Engineer Paris 2025
PRODID:-//github.com/ical-org/ical.net//NONSGML ical.net 4.0//EN
VERSION:2.0
X-WR-CALNAME:AI Engineer Paris 2025
BEGIN:VTIMEZONE
TZID:Romance Standard Time
X-LIC-LOCATION:Europe/Paris
BEGIN:STANDARD
DTSTART:20241027T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20250330T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
DESCRIPTION:
DTEND:20250923T200000
DTSTAMP:20260408T142705Z
DTSTART:20250923T160000
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Registration & Expo Opening
UID:SZSESSIONff7e2b48-8c42-439b-8b42-0cb016032f44
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speakers: Ben Dunphy\, Raouf Chebri\, Shawn Wang\, Yann Leger
DTEND:20250923T180000
DTSTAMP:20260408T142705Z
DTSTART:20250923T173000
LOCATION:Master Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Welcome Keynote
UID:SZSESSION1039070
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Lélio Renard Lavaud\n\nOpen source AI isn’t just a tr
 end\, it’s the key to unlocking enterprise-wide transformation. Adopting 
 AI at scale requires overcoming vendor lock-in\, data complexity\, and tr
 ansparency gaps. This keynote explores how Mistral AI’s open-source model
 s provide the control\, customization\, and reliability enterprises need 
 to integrate AI effectively\, and drive real business outcomes.
DTEND:20250923T183000
DTSTAMP:20260408T142705Z
DTSTART:20250923T180000
LOCATION:Master Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:How open source drives successful enterprise adoption
UID:SZSESSION1038760
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Catch the welcome keynote\, mingle with other conference atten
 dees\, and enjoy hors d'oeuvres.
DTEND:20250923T200000
DTSTAMP:20260408T142705Z
DTSTART:20250923T183000
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Welcome Reception
UID:SZSESSION2efcb63f-2d77-45f7-8a9b-4ad8feb4feb5
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:
DTEND:20250924T190000
DTSTAMP:20260408T142705Z
DTSTART:20250924T080000
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Registration
UID:SZSESSION2cc70a10-6473-45eb-927c-805ea5006b89
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:
DTEND:20250924T094500
DTSTAMP:20260408T142705Z
DTSTART:20250924T093000
LOCATION:Master Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Event Kickoff
UID:SZSESSION1aa7c69c-4679-445e-8bc4-232cb872ae14
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Emil Eifrem\n\nEmil Eifrem\, founder of Neo4j\, has s
 pent two decades building databases and obsessing over the field of knowl
 edge representation. In this talk\, he brings that lens to AI engineering
 : what's the current state of managing state in AI applications\, what pa
 tterns of state management are emerging in the wild and\, based on hundre
 ds of AI deployments across startups and enterprises\, a perspective on w
 here the data layer of the AI stack is headed.\n
DTEND:20250924T101500
DTSTAMP:20260408T142705Z
DTSTART:20250924T094500
LOCATION:Master Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:The State of^H^Hin AI Engineering
UID:SZSESSION990515
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speakers: Aleksandar Mitic\, Jo Kelly-Fenton\n\nWe don't need 
 LLMs to write new code. We need them to clean up the mess we already made
 .\n\nIn mature organizations\, we have to maintain and migrate the existi
 ng codebase. Engineers are constantly balancing new feature development w
 ith endless software upkeep.\n\nBut what if you could rewrite your codeba
 se\, every single day\, across thousands of repositories? What if your en
 gineers didn't have to maintain their code?\n\nAt Spotify\, we are seeing
  early success using LLMs to perform predictable\, repeatable and effortl
 ess code migrations.\n\nIn this talk\, we’ll share how we created an Agen
 tic Migrator that has gotten over 1000 PRs merged across several engineer
 ing disciplines. We will tell you how we reason about solving the comp
 lexity of LLMs maintaining code at scale. From managing build feedback lo
 ops across thousands of repos\, to evaluating prompt effectiveness and ma
 stering the sheer complexity of our diverse codebase.\n
DTEND:20250924T103000
DTSTAMP:20260408T142705Z
DTSTART:20250924T100000
LOCATION:Founder's Cafe
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Rewriting all of Spotify's code base\, all the time.
UID:SZSESSION991217
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Pierre Burgy\n\nAI evaluations are broken. Benchmark 
 scores look good on paper\, but they rarely translate into user value. Wh
 at matters isn’t how your model performs on curated test sets—it’s what y
 our users actually use. I’ll walk through why most AI evals are a distrac
 tion\, how we built “vibe benchmarks” by watching usage patterns\, and ho
 w qualitative feedback loops are often more scalable than gold labels. If
  you’re serious about shipping AI\, this talk will change how you measure
  success.
DTEND:20250924T103000
DTSTAMP:20260408T142705Z
DTSTART:20250924T100000
LOCATION:Junior Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Vibe > Benchmarks: Rethinking AI Evaluation for the Real World
UID:SZSESSION981365
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Ogi Bostjancic\n\nIn this beginner-friendly workshop\
 , we will develop an AI agent that analyzes GitHub open source contributi
 ons and assigns RPG-style attribute levels and character classes. We’ll g
 o over basic agent development practices like tweaking system prompts\, b
 uilding reliable tools\, and orchestrating simple decision flows. We’ll a
 lso touch on observability: how to monitor agent behavior\, track key sig
 nals\, and make sure it stays on track.
DTEND:20250924T110000
DTSTAMP:20260408T142705Z
DTSTART:20250924T100000
LOCATION:Central Room
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Open Source Champions: Gamify GitHub Contributions with an AI Agent
UID:SZSESSION1028375
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Keirah Dein
DTEND:20250924T102000
DTSTAMP:20260408T142705Z
DTSTART:20250924T101500
LOCATION:Master Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Democratizing AI Agents: Building\, Sharing\, and Securing Made Si
 mple
UID:SZSESSION1036398
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Martin Woodward\n\nThe Model Context Protocol (MCP) i
 s evolving rapidly. Building a server that scales\, keeps pace with the c
 ommunity\, and stays secure is hard. Get the latest from GitHub's Martin 
 Woodward as he shares the lessons learned from building one of the most p
 opular MCP servers in use today.
DTEND:20250924T110000
DTSTAMP:20260408T142705Z
DTSTART:20250924T103000
LOCATION:Master Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Building MCP's at GitHub Scale
UID:SZSESSION991456
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Bertrand Charpentier\n\nAs AI models become more comp
 lex\, the cost of inference—both in terms of computation and energy—conti
 nues to rise. In this talk\, we will explore how combining compression te
 chniques such as quantization\, pruning\, caching\, and distillation can 
 significantly optimize model performance during inference. Applying these
  methods together makes it possible to reduce model size and computationa
 l load while maintaining quality\, making AI more accessible and environm
 entally sustainable.
DTEND:20250924T110000
DTSTAMP:20260408T142705Z
DTSTART:20250924T103000
LOCATION:Founder's Cafe
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:How to make your AI models faster\, smaller\, cheaper\, greener?
UID:SZSESSION991543
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Merrill Lutsky\n\nMost AI agents are built to write c
 ode. Reviewing it is a harder\, more nuanced challenge. It requires askin
 g questions\, identifying risk\, understanding architecture\, and knowing
  when something doesn’t feel right. In other words\, it requires judgment
 .\n\nIn this talk\, we’ll walk through how we built Chat\, our agent for 
 code review\, by modeling how senior engineers approach the task. That in
 cludes how they pull context from the current PR\, the surrounding codeba
 se\, historical changes\, and broader team conventions. We’ll share how w
 e designed the system to decide which context to reach for\, how we use e
 vals to measure useful behavior\, and why reviewing code requires a compl
 etely different agentic workflow than generating it.\n\nThis is not a tal
 k about training models. It’s a deep dive into behavior design\, context 
 orchestration\, and the real-world lessons that shaped how we built and s
 hipped Chat.
DTEND:20250924T110000
DTSTAMP:20260408T142705Z
DTSTART:20250924T103000
LOCATION:Junior Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Inside Chat: how we taught AI to review code like a senior engineer
UID:SZSESSION991371
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:
DTEND:20250924T113000
DTSTAMP:20260408T142705Z
DTSTART:20250924T110000
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Morning Break
UID:SZSESSIONb882073e-e321-4397-b8d6-e3cb35dc0398
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Miguel Betegón\n\nLet's live-fix a slow agent togethe
 r. In this demo we'll see how to use Sentry's AI agent monitoring to ship
  agents with confidence.\n
DTEND:20250924T111000
DTSTAMP:20260408T142705Z
DTSTART:20250924T110500
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Live Debugging AI Agents
UID:SZSESSION1036410
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Srilakshmi Chavali\n\nConversations with AI play out 
 over many turns\, yet most evaluations stop at single responses. In this 
 lightning talk\, we’ll explore how session-level evaluations in Arize AX 
 shift the focus to the entire interaction. This approach lets teams desig
 n evaluations that capture qualities such as accuracy\, goal completion\,
  or user frustration—surfacing patterns across conversations that single-
 turn checks miss. By looking at the full flow\, practitioners gain a more
  realistic view of how their AI behaves in practice.
DTEND:20250924T111500
DTSTAMP:20260408T142705Z
DTSTART:20250924T111000
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Beyond Single Turns: Evaluating AI Agents at the Session Level
UID:SZSESSION1033214
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Yann Leger\n\nThe LLM and GPU gold rush is over. What
  comes next is the rise of agentic workflows and with them\, a more diver
 se\, resilient infrastructure built on a mix of GPUs\, accelerators\, and
  good old CPUs.\n\nAI agents are reshaping what infrastructure must deliv
 er. They don’t just consume compute\; they demand fast\, secure sandboxed
  environments\, continuous inference at scale\, and seamless interaction 
 across heterogeneous accelerators\, memory\, and storage systems.\n\nIn t
 his talk\, we’ll map out the state of infrastructure for agents and infer
 ence: the technical building blocks\, the trade-offs from chips to virtua
 lization and storage\, and the broader shifts needed to make AI infrastru
 cture a true foundation for the agentic era.
DTEND:20250924T120000
DTSTAMP:20260408T142705Z
DTSTART:20250924T113000
LOCATION:Master Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Building for the Agentic Era: The Future of AI Infrastructure
UID:SZSESSION1031329
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Robert Brennan\n\nToday's agents are best at small\, 
 atomic coding tasks. Much larger tasks--like major refactors and breaking
  dependency updates--are highly automatable but hard to one-shot.\n\nIn t
 his session\, we'll discuss patterns for orchestrating large-scale code c
 hanges with swarms of agents and a human in the loop.\n\nWe'll also work 
 through a concrete example: migrating an entire codebase from one React s
 tate management library to another.
DTEND:20250924T120000
DTSTAMP:20260408T142705Z
DTSTART:20250924T113000
LOCATION:Founder's Cafe
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Automating massive refactors with parallel agents
UID:SZSESSION983384
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Marlene Mhangami\n\nVS Code is the most popular code 
 editor in the world\, and combined with GitHub Copilot\, it gives AI engi
 neers the opportunity to speed up their workflows with AI. In this talk w
 e'll explore what it looks like to build an MCP server for VS Code. We'll
  explore prompts\, tools\, resources\, and sampling\, and understand how 
 these can be used with high or low autonomy. We'll also explore some of t
 he security issues associated with MCP and how using the Azure AI Inferen
 ce API can mitigate them.
DTEND:20250924T120000
DTSTAMP:20260408T142705Z
DTSTART:20250924T113000
LOCATION:Junior Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Building MCP Servers for VS Code
UID:SZSESSION990296
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speakers: Djordje Lukic\, Jean-Laurent de Morlhon\, Mat Wilson
 \n\nJoin us for a hands-on workshop where you’ll learn to orchestrate AI 
 agent teams that collaborate like real experts using Docker’s cagent.\n\n
 In this session\, we’ll move beyond single-model interactions to create s
 ophisticated multi-agent systems where specialized AI agents work togethe
 r\, delegate tasks intelligently\, and leverage external tools through th
 e Model Context Protocol (MCP).\n\nWhat You’ll Learn:\n\nDesign and confi
 gure specialized AI agents with distinct roles and capabilities\n\nImplem
 ent smart delegation patterns between agents for complex problem-solving\
 n\nIntegrate external tools and APIs using MCP servers (local and remote)
 \n\nLeverage built-in tools like memory\, task management\, and reasoning
  capabilities\n\nDeploy agent configurations using Docker Hub for team co
 llaboration\n\nSwitch seamlessly between AI providers (OpenAI\, Anthropic
 \, Google\, and Docker Model Runner)\n\nHands-On Activities: Participants
  will build a working multi-agent system starting from a simple assistant
  and evolving it into a coordinated team. We’ll create agents that can re
 member context\, manage tasks\, access filesystems\, search the web\, and
  delegate work to specialists - all configured through simple YAML files.
 \n\nWho Should Attend: Developers\, architects\, and engineering leaders 
 interested in practical AI agent orchestration\, whether for automating w
 orkflows\, building AI-powered applications\, or exploring the future of 
 collaborative AI systems.\n\nPrerequisites:\n\nBasic understanding of YAM
 L configuration\n\nFamiliarity with API concepts\n\nLaptop with Go 1.24+ 
 installed (or ability to use prebuilt binaries)\n\nAPI key from at least 
 one provider (OpenAI\, Anthropic\, or Google)\n\nKey Takeaway: You’ll lea
 ve with a working multi-agent system\, ready-to-use configuration templat
 es\, and the knowledge to design and deploy your own AI agent teams for r
 eal-world applications.
DTEND:20250924T123000
DTSTAMP:20260408T142705Z
DTSTART:20250924T113000
LOCATION:Central Room
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Building Intelligent Multi-Agent Systems with docker cagent: From 
 Solo AI to Collaborative Teams
UID:SZSESSION1025259
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Andreas Blattmann\n\nImage generation is amazing\, bu
 t editing? Most approaches either sacrifice quality for speed\, or produc
 e inconsistent results when users make iterative edits. Inference time is
  also a major bottleneck for real-world applications.\n\nIn this talk\, A
 ndreas will share how Black Forest Labs solves these challenges with FLUX
 .1 Kontext. You'll learn exactly how Latent Flow Matching enables consist
 ent iterative editing\, and the secrets behind Adversarial Diffusion Dist
 illation\, the technique that achieves near real-time inference while sol
 ving the consistency problem.
DTEND:20250924T123000
DTSTAMP:20260408T142705Z
DTSTART:20250924T120000
LOCATION:Master Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Inside FLUX\, How It Really Works
UID:SZSESSION991327
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Miguel Betegón\n\nIs MCP a thing? A lot of companies 
 are still wondering. It CAN be a thing\, or you can spend your AI budget 
 chasing your tail.\n\nThis talk traces the path we at Sentry followed to 
 reach 30M requests/month on our MCP server\, which got us into the Micros
 oft Build keynote\, the Anthropic keynote\, and more: from our latest out
 age to building our own monitoring.
DTEND:20250924T123000
DTSTAMP:20260408T142705Z
DTSTART:20250924T120000
LOCATION:Founder's Cafe
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:MCP isn’t good\, yet we got to 30M requests/month
UID:SZSESSION991559
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Yves Brissaud\n\nAI-powered coding agents are everywh
 ere. They help us write boilerplate and boring code\, surprise us by gene
 rating features\, or even build entire applications. And this is more tha
 n a passing trend: agents are already part of our daily workflow.\nBut th
 e unwritten aspect is that our role as developers is shifting. Our code i
 s no longer written by us alone. We now need to review\, orchestrate\, an
 d integrate the work of multiple autonomous agents\, sometimes across mul
 tiple codebases. In a sense\, we are becoming something that once sounded
  outdated: integrators.\nTo help us in this new\, critical role\, we need
  tools. We need local Continuous Integration tools: the kind that also in
 tegrate well with coding agents.\nAnd the good news is that those tools a
 lready exist in the open-source world: container-use offers a proper isol
 ated environment for coding agents\, and dagger continuously integrates t
 he generated code.
DTEND:20250924T123000
DTSTAMP:20260408T142705Z
DTSTART:20250924T120000
LOCATION:Junior Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:The rise of local CI tooling. Thanks AI coding agents!
UID:SZSESSION984106
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Tomaz Bratanic\n\nJoin us at the Neo4j booth!\n\nRAG 
 works - until complex context kicks in. GraphRAG upgrades it by weaving i
 n knowledge graphs to structure retrieval\, boost relevance\, and enable 
 explainable\, precise generation. Learn how it fuses symbolic reasoning w
 ith neural search to power next-gen\, context-aware AI.
DTEND:20250924T122000
DTSTAMP:20260408T142705Z
DTSTART:20250924T120000
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Agentic GraphRAG: Context that Connects
UID:SZSESSION1040931
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Andreas Kollegger\n\nJoin us at the Neo4j booth!\n\nW
 alk with me along some agentic workflows where LLMs and graphs meet. Grap
 hs are great\, but building them can be daunting. This is a marvelous tas
 k for a team of agents\, specialized in each aspect of knowledge graph co
 nstruction.
DTEND:20250924T124500
DTSTAMP:20260408T142705Z
DTSTART:20250924T122500
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Agentic Knowledge Graph Construction
UID:SZSESSION1040992
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:
DTEND:20250924T140000
DTSTAMP:20260408T142705Z
DTSTART:20250924T123000
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Lunch Break
UID:SZSESSIONb32d694b-cff8-466f-a170-342a53bbfbc7
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Stephen Batifol\n\nStep inside the FLUX family - from
  open-weights you can fine-tune and customize\, to advanced models built 
 for high-quality results out of the box. In this interactive workshop\, w
 e’ll break down the differences between FLUX [dev]\, FLUX [pro]\, and FLU
 X Kontext\, and explore where each shines. You’ll learn how to prompt eff
 ectively\, use references for control\, and move from first image generat
 ion to editing and transformation. Expect live demos\, hands-on experimen
 tation\, and practical techniques you can take straight into your own pro
 jects.
DTEND:20250924T134500
DTSTAMP:20260408T142705Z
DTSTART:20250924T124500
LOCATION:Central Room
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Lunch & Learn - Inside FLUX: From Open-Weights to Advanced Models
UID:SZSESSION1028380
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Paige Bailey\n\nGoogle DeepMind has been at the foref
 ront of AI's biggest research breakthroughs. This session demonstrates ho
 w that cutting-edge research is now accessible to every engineer. Discove
 r how the advanced reasoning and context window of the Gemini models are 
 moving from the lab to live production.
DTEND:20250924T125000
DTSTAMP:20260408T142705Z
DTSTART:20250924T124500
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:From Research to Reality with Google DeepMind
UID:SZSESSION1040287
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Paul-Louis Nech\n\nLearn how you can use Algolia to p
 ower AI Agents with RAG\, Recommendations\, and more.
DTEND:20250924T125500
DTSTAMP:20260408T142705Z
DTSTART:20250924T125000
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:An intro to Algolia Agent Studio
UID:SZSESSION1037299
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Eric Duffy\n\nJoin to learn about novel AI accelerato
 rs with Tenstorrent\n\nTaking place at the Koyeb booth
DTEND:20250924T132000
DTSTAMP:20260408T142705Z
DTSTART:20250924T130000
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Fireside Chat with Tenstorrent
UID:SZSESSION1040885
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Djordje Lukic
DTEND:20250924T133000
DTSTAMP:20260408T142705Z
DTSTART:20250924T132000
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:How a Docker Engineer Automated Their Way to an Agent Framework
UID:SZSESSION1036919
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Zach Blumenfeld\n\nThe biggest challenge in building 
 reliable AI agents isn't the LLM—it's context management. Most developers
  struggle with the same problem: there's no consistent way to fit the rig
 ht information into LLM context windows\, making agent workflows fragile 
 and complex. This talk demonstrates how graph-based context engineering s
 olves this by shifting complexity from your application code to the data 
 layer. We'll explore how modeling context as connected data enables agent
 s to naturally traverse relationships and perform multi-hop reasoning - d
 elivering faster\, more accurate retrieval\, maintaining persistent memor
 y across sessions\, and enabling agents that grow smarter as your data an
 d requirements evolve.\nThrough practical examples\, you'll see how graph
  structures transform agents from brittle prototypes into intelligent sys
 tems capable of explainable reasoning and reliable execution. You'll leav
 e with concrete patterns for implementing graph-based context engineering
  that makes your agents genuinely smarter and more dependable.
DTEND:20250924T134000
DTSTAMP:20260408T142705Z
DTSTART:20250924T133000
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Context Engineering with Graphs for More Intelligent Agents
UID:SZSESSION1036236
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Laurent Sifre\n\nThe future of AI won’t be built behi
 nd closed doors. Open source is essential to creating the strike point wh
 ere innovation can scale without prohibitive costs\, and without relying 
 solely on ever-larger\, power- and funding-hungry models. But efficiency 
 alone isn’t enough: real progress happens when developers can freely expe
 riment\, combine\, and extend the “bricks” that make up modern AI systems
 .\n\nIn this talk\, Laurent will explain why open source is central to AI
 ’s next wave\, how it accelerates iteration and adoption\, and why giving
  developers the freedom to play with the building blocks matters. He will
  also talk about a new portal that brings these bricks together in one pl
 ace — making it easier than ever for builders to discover\, test\, and as
 semble the technologies shaping the next generation of AI.
DTEND:20250924T143000
DTSTAMP:20260408T142705Z
DTSTART:20250924T140000
LOCATION:Master Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Assembling the Future: Open Source Bricks for the Next Generation 
 of AI
UID:SZSESSION1027575
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Daniel Homola\n\nAI agents are evolving beyond APIs t
 o navigate the same graphical interfaces humans use every day.\nWhat if l
 arge language models could power agents to operate applications as seamle
 ssly as humans\, unlocking automation in domains where APIs fall short?\n
 In this talk\, we will explore why GUI agents matter\, compare API-based 
 and GUI-based approaches\, and share practical insights from building an 
 LLM-based GUI agent.
DTEND:20250924T143000
DTSTAMP:20260408T142705Z
DTSTART:20250924T140000
LOCATION:Founder's Cafe
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:LLM-Based GUI Agents: Bridging Human Interfaces and Autonomous AI
UID:SZSESSION991650
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Rémi Louf\n\nEvery open-source model reinvents functi
 on calling. Some spit out XML tags\, others JSON with custom delimiters\,
  some hide calls in markdown\, and many just hallucinate new syntaxes alt
 ogether. Add schema violations\, phantom parameters\, and jumbled optiona
 l fields\, and half your engineering time vanishes into parsing hacks and
  wishful thinking.\n\nBut it doesn’t have to be this way. I will show how
  our new library .lambda uses constrained decoding to keep calls within s
 chema\, handle optional and required fields correctly\, and support paral
 lel or nested calls without special-case hacks. All with near-zero overhe
 ad\, regardless of the model.\n
DTEND:20250924T143000
DTSTAMP:20260408T142705Z
DTSTART:20250924T140000
LOCATION:Junior Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Function calling that doesn’t suck
UID:SZSESSION1039874
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Adam Cowley\n\nIn this course\, you will learn how Ne
 o4j and Knowledge Graphs can help you create Generative AI (GenAI) applic
 ations.  We will explore where semantic search falls short\, how relation
 ships provide context to text chunks\, and how LLMs can convert natural l
 anguage into database queries that produce deterministic results.
DTEND:20250924T150000
DTSTAMP:20260408T142705Z
DTSTART:20250924T140000
LOCATION:Central Room
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Hands-on GraphRAG
UID:SZSESSION1035842
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Andreas Kollegger\n\nGenerating data is as easy as ge
 nerating code\, with the same joys and tribulations. Raise hands for a su
 bject and we'll yolo some data together.
DTEND:20250924T143000
DTSTAMP:20260408T142705Z
DTSTART:20250924T142500
LOCATION:Master Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Vibing With Data
UID:SZSESSION1036239
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Vaibhav Srivastav\n\nIn this talk\, VB from Hugging F
 ace will share the latest trends shaping open large language models in 20
 25 — from new model releases and adoption patterns to the challenges of s
 caling and regulation. The session will highlight where open-source is th
 riving\, what hurdles remain\, and what engineers should expect next.
DTEND:20250924T150000
DTSTAMP:20260408T142705Z
DTSTART:20250924T143000
LOCATION:Master Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:State of Open LLMs in 2025
UID:SZSESSION1026749
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Alberto Castelo\n\nYou've picked your model\, written
  your prompts\, but your AI still hallucinates about user data. Welcome t
 o context engineering—the discipline of choosing what information to feed
  your LLM and when. Through real examples from Shopify Sidekick\, we'll e
 xplore how the right context transforms mediocre outputs into magic. Lear
 n our framework for context selection\, strategies for working within tok
 en limits\, and how we dynamically compose context based on user intent. 
 This talk will change how you think about LLM inputs and show why context
  engineering might be the highest-leverage skill in production AI systems
 .
DTEND:20250924T150000
DTSTAMP:20260408T142705Z
DTSTART:20250924T143000
LOCATION:Founder's Cafe
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Context Engineering: The Art of Feeding LLMs
UID:SZSESSION989665
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Robin Nabel\n\nEveryone starts with artisanal model b
 uilding: one-off scripts\, manual runs\, lost lineage. We built a factory
  instead. In this talk\, I'll show how poolside transforms research ideas
  into validated results at large scale: 500+ composable assets with immut
 able lineage\, orchestration across 10K H200s\, and RL from code executio
 n feedback on a million containerized repos. You'll see why a new joiner 
 shipped novel agent behaviors in a week\, how we caught a subtle logprobs
  bug that broke RL\, and why training the machine that trains the models 
 - not just scaling parameters - is the real competitive edge.
DTEND:20250924T150000
DTSTAMP:20260408T142705Z
DTSTART:20250924T143000
LOCATION:Junior Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:From Artisanal Training to Foundation Model Factory
UID:SZSESSION1039876
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Aparna Dhinakaran\n\nHumans aren’t frozen in the way 
 they think\; why\, then\, are system prompts static today? In order for a
 gents to learn and adapt\, they must be able to update the system prompts
  themselves. \n\nIn this session\, we will release data from new experime
 nts on how agents can pick up explanations of fixes and annotations and b
 uild out instruction updates for system prompts – with demos showing thes
 e techniques used in real agent environments\, from code agents to gaming
  agents. \n\nSimilar to how humans learn what to do from their environmen
 t\, this approach uses feedback to improve and drive an agent. \n\nWith t
 his approach: prompts evolve\, natural language feedback = error signal\,
  a MetaPrompt rewrites/reinserts targeted instruction\, and agents run a 
 prompt learning loop post deployment -- allowing them to continuously imp
 rove upon themselves online.\n
DTEND:20250924T153000
DTSTAMP:20260408T142705Z
DTSTART:20250924T150000
LOCATION:Master Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:System Prompt Learning for Agents
UID:SZSESSION986403
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Jesús Espino\n\nIf you work in healthcare\, finance\,
  or government\, running an AI coding agent in your development environme
 nt can be risky. In these environments\, safety\, control\, and complianc
 e aren’t optional. They’re required. But what if you could build an AI ag
 ent that works with all those rules\, not against them?\nIn this talk\, w
 e'll describe how we built Ona\, Gitpod’s programming agent\, to run full
 y isolated inside secure development environments. We’ll cover how isolat
 ion\, auditability\, and reproducibility are achieved in Ona\, and how we
  provide all these capabilities without customer data ever leaving their 
 infrastructure. You’ll learn how to design agents that are safe to use\, 
 even in places where “move fast and break things” is not an option.
DTEND:20250924T153000
DTSTAMP:20260408T142705Z
DTSTART:20250924T150000
LOCATION:Founder's Cafe
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:How We Built an AI Agent for Highly Regulated Environments
UID:SZSESSION991541
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Julien Launay\n\nInstead of relying on explicit orche
 stration\, reasoning agents plan and execute their own tool calls. Reason
 ing agents can autonomously refine their search of information across dat
 a sources\, combine API calls to execute complex actions\, and defer work
  to specialized sub-agents. This paradigm enables highly performant and r
 obust agents\, going beyond explicitly defined agentic graphs. We will re
 view how these agents can be trained end-to-end with reinforcement learni
 ng\, and showcase a few example case studies with Fortune 1000 enterprise
 s.
DTEND:20250924T153000
DTSTAMP:20260408T142705Z
DTSTART:20250924T150000
LOCATION:Junior Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Building reasoning agents with reinforcement learning
UID:SZSESSION1040039
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speakers: Guillaume Vernade\, Paige Bailey\, Patrick Loeber\n\
 nIn this session\, we'll vibe code an application from scratch. You'll see
  how our Nano Banana model and the Live API enable a rapid\, interactive 
 development workflow. This hour is all about high-impact demos and pure u
 tility\, giving you a powerful blueprint for building better and faster w
 ith the latest models from Google DeepMind.
DTEND:20250924T160000
DTSTAMP:20260408T142705Z
DTSTART:20250924T150000
LOCATION:Central Room
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Build with Google AI Studio: The fastest path from prompt to produ
 ction with Gemini
UID:SZSESSION1040288
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Stephen Batifol\n\nJoin us for a conversation with Bl
 ack Forest Labs' Developer Advocate Stephen Batifol. We will dive into de
 sign innovations and decisions\, AI models and art\, and the developer co
 mmunity.
DTEND:20250924T152000
DTSTAMP:20260408T142705Z
DTSTART:20250924T150000
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Fireside Chat with Black Forest Labs
UID:SZSESSION1040894
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Steeve Morin\n\nzml/attnd replaces dense attention wi
 th a sparse\, predictive attention algorithm that operates in log-linear 
 time\, dramatically reducing the compute requirements while maintaining o
 utput quality. It operates on CPU over UDP and matches or even outperform
 s GPUs in key scenarios.
DTEND:20250924T160000
DTSTAMP:20260408T142705Z
DTSTART:20250924T153000
LOCATION:Master Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Towards unlimited contexts: faster-than-GPU sparse logarithmic att
 ention on CPU
UID:SZSESSION1028617
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Oleg Šelajev\n\nEveryone can throw together an LLM\, 
 some MCP tools\, and a chat interface\, and get an AI assistant we could 
 only dream of a few years back. Add some “business logic” prompts\, and y
 ou get an AI workflow\; hopefully a helpful one. \nBut how do you take it
  from a local hack to a production application? Typically\, you drown in 
 privacy questions\, juggle npx commands for MCPs\, and end up debugging O
 Auth flows before it hopefully starts to make sense.\n\nIn this session\,
  we show a repeatable process for turning your local AI workflow experime
 nts into a production-ready deployment using containerized\, static confi
 gurations. \n\nWhether you prefer chat interfaces or replace them with ap
 plication UIs\, you’ll leave with solid ideas for going from a cool demo 
 to real applications without the existential dread of DevOps.\n
DTEND:20250924T160000
DTSTAMP:20260408T142705Z
DTSTART:20250924T153000
LOCATION:Founder's Cafe
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Building AI workflows: from local experiments to serving users
UID:SZSESSION977085
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Lars Trieloff\n\nRunning LLMs\, VLMs\, and voice mode
 ls on consumer hardware may sound even more intimidating than paying for 
 multiple 200-Euro subscriptions for frontier models\, but it doesn't have
  to be. We'll take a look at open models\, tools\, and infrastructure tha
 t fit onto your desk\, suitcase\, or pocket.\n\nIn this talk\, I will sha
 re my personal experiences with consu
 mer-sized LLMs\, using LM Studio\, Ollama\, MLX\, and how to tie them tog
 ether to build interesting MCPs for an audience of one\, run small models
  on even smaller machines\, and talk about taking your home lab on the ro
 ad: metaphorically through tunnels\, and literally (by doing a live demo 
 on my Mac Studio\, straight from my suitcase).\n\nListen to this talk if y
 ou've been interested in local LLMs\, but too afraid to ask what a Hugging
  Face is. Leave this talk with concrete next steps toward your own AI hom
 e lab.
DTEND:20250924T160000
DTSTAMP:20260408T142705Z
DTSTART:20250924T153000
LOCATION:Junior Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Taking your AI home lab on the road: a look at (small-ish) AI in 2
 025
UID:SZSESSION984892
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Stephane Jourdan\n\nJoin us at the Koyeb booth for a 
 talk with Anyshift\, the AI SRE for Modern DevOps Teams.
DTEND:20250924T155000
DTSTAMP:20260408T142705Z
DTSTART:20250924T153000
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Presentation by Anyshift
UID:SZSESSION1040889
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:
DTEND:20250924T163000
DTSTAMP:20260408T142705Z
DTSTART:20250924T160000
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Afternoon Break
UID:SZSESSIONf3b15b43-bdae-4117-8d75-a7be4d108b74
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speakers: Assaf Araki\, Floriane de Maupeou\, Thomas Turelier\
 n\nWhile flashy AI agents get the headlines\, it’s the infrastructure lay
 er that’s the most likely to determine whether AI systems scale\, adapt\,
  or fail. Data pipelines\, model optimization and deployment\, orchestrat
 ion\, observability\, non-language models... This panel offers a VC persp
 ective on this next frontier: where momentum is real\, where the stack re
 mains unfinished\, and how funding priorities are shifting.\n
DTEND:20250924T163000
DTSTAMP:20260408T142705Z
DTSTART:20250924T160500
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:VC Panel — From Hype to Hard Tech: The Future of the AI Stack
UID:SZSESSION1038069
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Tuana Çelik\n\nMost developers start with basic RAG i
 mplementations\, but production AI systems demand sophisticated multi-ste
 p reasoning\, as well as additional tooling. This talk demonstrates build
 ing production-ready agent workflows using the newly released LlamaIndex 
 Workflows 1.0. We will demonstrate how we can build powerful agents withi
 n the confines of our specific workflow design\, minimizing the risk of e
 rror or unwanted behaviour.\n\nWe'll see the evolution from a simple RAG 
 setup to complex agents that handle tool use\, document analysis\, repo
 rt generation\, and human validation. Using Workflows 1.0's event-driven 
 architecture\, we'll build practical patterns for query planning\, memory
  persistence\, and state management—all running in real-time during the p
 resentation.\n\nThe session features NotebookLlama\, our open-source Note
 bookLM alternative\, demonstrating how Workflows 1.0 powers document-to-p
 odcast generation and multi-modal analysis in production.
DTEND:20250924T170000
DTSTAMP:20260408T142705Z
DTSTART:20250924T163000
LOCATION:Master Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Building an open-source NotebookLM alternative
UID:SZSESSION986255
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Hervé Bredin\n\nBefore LLMs\, before Speech-To-Text\,
 speaker diarization is the foundational layer of conversational AI pipe
 lines. Getting "who speaks when" wrong may lead to catastrophic predictio
 ns down the line. From digital meeting notetakers to AI medical scribes\,
  from AI video dubbing to podcast intelligence platforms\, knowing "who s
 aid what" is often just as important as "what was said" in a conversation
 . \n\nIn this talk\, Hervé will introduce what speaker diarization is\, w
 hat it is not\, and why this apparently simple machine learning problem h
 as yet to be solved.
DTEND:20250924T170000
DTSTAMP:20260408T142705Z
DTSTART:20250924T163000
LOCATION:Founder's Cafe
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Speaker diarization: the foundational layer of conversational AI
UID:SZSESSION1040248
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Thomas Schmidt\n\nText2SQL demos work great until rea
 l users show up. Then your agent finds 47 different customer tables\, hal
 lucinates metrics that don't exist\, and confidently tells the CEO that c
 hurn is -40% (which would be impressive if it weren't completely wrong).\
 n\nI am part of the team that builds Metabot - an AI assistant that lives
  inside Metabase to help users answer their own data questions without bo
 thering the analytics team. Simple goal\, right? Turns out teaching an AI
  to navigate real organizational data is like teaching someone to drive i
 n a city where all the street signs are wrong and half the roads don't ex
 ist on any map.\n\nThis talk is your field guide to the chaos we've encou
 ntered. We'll share the specific disasters that taught us hard lessons: w
 hy your pristine demo data means nothing\, how users will find every edge
  case you never considered\, why business rules written nowhere will brea
 k everything\, and how "temporary" tables from 2019 somehow become produc
 tion dependencies.\n\nYou'll walk away with battle-tested strategies for 
 building analytics agents that survive contact with real organizations: p
 ractical approaches to data documentation\, designing agent tools that wo
 n't spectacularly backfire\, and building guardrails that actually work w
 hen chaos strikes. This isn't a sales pitch disguised as a tech talk - it
 's a real field guide to the beautiful disaster of production AI systems.
DTEND:20250924T170000
DTSTAMP:20260408T142705Z
DTSTART:20250924T163000
LOCATION:Junior Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Everything That Can Go Wrong Building Analytics Agents (And How We
  Survived It)
UID:SZSESSION990182
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: SallyAnn DeLucia\n\nHow do we move from “my agent wor
 ks on my examples” to having confidence that it works systematically? In 
 this workshop\, we’ll walk through a structured approach to agent evaluat
 ion using Arize.\n\nParticipants will learn how to turn raw agent interac
 tions into actionable evaluations: surfacing common issues through annota
 tion\, grouping them into reusable evaluation templates\, and validating 
 them at scale. We’ll introduce Alyx to generate evaluators\, experiment w
 ith them in the playground\, and discuss how to take evaluations from off
 line design to continuous online evaluation.\n\nFinally\, we’ll explore t
 he different levels of evaluation — span\, trace\, and session — and see 
 how our agent graph provides a holistic view of agent performance. By the
  end\, you’ll leave with a practical framework and transferable skills fo
 r systematically evaluating and improving any AI agent.\n\nKey Takeaways:
 \n-A reliable framework for evaluation: Learn where to begin when evaluat
 ing agents\, moving from raw annotations to structured evaluation templat
 es and automated evaluators.\n-Building intuition for better agents: See 
 how a systematic approach to evaluation develops the intuition needed to 
 design stronger\, more reliable AI systems.\n-Arize workflows in practice
 : Experience how Arize supports this process end-to-end — from annotation
  and template creation to running experiments and monitoring in productio
 n.\n-A systems approach to evaluation: Understand why evaluating agents r
 equires more than testing single prompts — it’s about building a continuo
 us\, structured system for quality and reliability.\n\nFormat:\nInteracti
 ve demos\, guided exercises\, and live iteration on real LLM outputs. Par
 ticipants will walk away with practical techniques they can apply immedia
 tely in their own LLM projects.
DTEND:20250924T173000
DTSTAMP:20260408T142705Z
DTSTART:20250924T163000
LOCATION:Central Room
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Systematic Agent Evaluation with Arize
UID:SZSESSION1033210
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Paige Bailey\n\nThe way we create is fundamentally ch
 anging. This isn't about better chatbots\; it's about giving builders the
  power to craft new worlds\, tell new stories\, and solve problems in way
 s we couldn't before.\n\nIn this session\, we'll show you what this new e
 ra of creation looks like using Google DeepMind's latest models and hands
 -on demos in AI Studio. \n\nYou'll see Veo 3 craft cinematic video with d
 ialogue and sound from a single prompt\, watch Genie 3 spin up playable\,
  interactive worlds from a simple idea\, and witness the new multimodal r
 easoning that powers it all with Gemini 2.5 Pro. We'll even give you a li
 ve look at the Gemini 2.5 Flash Image Preview (Nano-Banana)\, a tool that
  lets you edit and fuse images with simple\, natural language. This is th
 e future\, and we’re building it in the open with you.\n\nWe'll also show
  Gemma 3\, our new family of open models engineered to run with incredibl
 e performance directly on your hardware. This puts state-of-the-art multi
 modal AI on laptops and phones\, opening a new frontier for on-device app
 lications.
DTEND:20250924T173000
DTSTAMP:20260408T142705Z
DTSTART:20250924T170000
LOCATION:Master Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:What's new and what's next for generative AI
UID:SZSESSION1025711
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speakers: Ekin Karabulut\, Peter Schuurman\n\nTraffic is spiki
 ng to your ML application. Your autoscaler kicks in. But instead of servi
 ng more requests\, your new replicas are stuck downloading massive model 
 weights\, loading them onto GPUs\, and warming up inference engines like 
 vLLM. Minutes pass\, response latency spikes\, making your application un
 usable. You haggle with DevOps to overprovision capacity so your applicat
 ion remains reliable. Cold starts become hot pain\, hurting latency\, dri
 ving up costs\, and making "just scale up" a lot more complicated than it
  sounds.\n\nIn this talk\, we’ll introduce a pattern for optimizing model
  loading for high performance inference. A case study\, Run:ai Model Stre
 amer\, is an open-source tool built to reduce cold start times by streami
 ng model weights directly to GPU memory in parallel. It’s natively integr
 ated with vLLM and SGLang\, supports MoE-style multi-file loading\, and s
 aturates object storage bandwidth across different cloud storage backend
 s. And all without requiring changes to your model format.\n\nWe’ll walk 
 through how Model Streamer works\, what bottlenecks it solves\, and what 
 we've learned from running it in production. Expect benchmarks\, practica
 l tips\, and best practices for making large-model inference on Kubernete
 s faster and more efficient.\n\nIf you’ve ever waited for a model to load
  and thought "surely this could be faster"\, this talk is for you!\n
DTEND:20250924T173000
DTSTAMP:20260408T142705Z
DTSTART:20250924T170000
LOCATION:Founder's Cafe
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Stop Wasting GPU Flops on Cold Starts: High Performance Inference 
 with Model Streamer
UID:SZSESSION991613
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Toma Puljak\n\nIf you’re building devtools for humans
 \, you’re building for the past. \n\nAlready a quarter of Y Combinator’s 
 latest batch used AI to write 95% or more of their code. AI agents are sc
 aling at an exponential rate and soon\, they’ll outnumber human developer
 s by orders of magnitude.\n\nThe real bottleneck isn’t intelligence. It
 ’s tooling. Terminals\, local machines\, and dashboards weren’t built for
  agents. They make do… until they can’t.\n\nIn this talk\, I’ll share how
  we killed the CLI at Daytona\, rebuilt our infrastructure from first pri
 nciples\, and what it takes to build devtools that agents can actually us
 e. Because in an agent-native future\, if agents can’t use your tool\, no
  one will.\n
DTEND:20250924T173000
DTSTAMP:20260408T142705Z
DTSTART:20250924T170000
LOCATION:Junior Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:AX is the only Experience that Matters
UID:SZSESSION976942
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Speaker: Neil Zeghidour\n\nNeil will describe Kyutai's open sc
 ience work on real-time voice AI: from full-duplex conversations with Mos
 hi\, to speech-to-speech translation with Hibiki and customizable voice a
 gents with Unmute.
DTEND:20250924T180000
DTSTAMP:20260408T142705Z
DTSTART:20250924T173000
LOCATION:Master Stage
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:Scaling real-time voice AI
UID:SZSESSION1036647
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Enjoy drinks and hors d'oeuvres while mingling with attendees\
 , speakers\, and sponsors.
DTEND:20250924T220500
DTSTAMP:20260408T142705Z
DTSTART:20250924T180000
LOCATION:Expo Hall
SEQUENCE:329702
STATUS:CONFIRMED
SUMMARY:After Party
UID:SZSESSION781aed19-794d-4b56-87db-6e69d1581705
END:VEVENT
END:VCALENDAR
