Session
Architecting Autonomy: Building and Testing Agentic LLM Workflows
Agentic AI systems promise to go beyond simple prompts and responses: workflows where large language models act, adapt, and deliver. But how do we actually build them with reliability in mind?
In this session, we’ll walk through the foundations of architecting agentic workflows: from understanding the Model Context Protocol (MCP) and AI tooling to choosing workflow orchestration tools and weighing their trade-offs. We’ll explore strategies for testing: deterministic tests where possible, and flexible testing frameworks where unpredictability is the rule. Finally, we’ll dive into how these tests guide iteration toward robust, production-ready agentic systems.
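To make the deterministic-versus-flexible distinction concrete, here is a minimal sketch in Python. The `run_agent` function and the shape of its result are hypothetical stand-ins, not part of any specific framework discussed in the talk: the first test pins down behavior that should never vary (which tool the agent selects), while the second asserts only properties of the free-form text, since exact wording is unpredictable.

```python
# Minimal sketch of the two testing styles, runnable with pytest.
# `run_agent` is a hypothetical stand-in for an agent entry point;
# in a real system it would invoke an LLM with tools (e.g., via MCP).

def run_agent(prompt: str) -> dict:
    # Stub implementation so the example is self-contained and runnable.
    return {
        "tool_calls": [{"name": "search_orders", "args": {"query": prompt}}],
        "answer": "Order 1042 shipped on 2024-06-01 and arrives Friday.",
    }

def test_tool_routing_is_deterministic():
    # Deterministic test: for this class of prompt, the agent must
    # always choose the same tool, regardless of how the model phrases
    # its final answer.
    result = run_agent("Where is my order?")
    assert result["tool_calls"][0]["name"] == "search_orders"

def test_answer_properties():
    # Flexible test: avoid exact string matching; assert properties
    # the answer should satisfy no matter how it is worded.
    result = run_agent("Where is my order?")
    answer = result["answer"]
    assert len(answer) > 0
    assert "order" in answer.lower()
```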
Grounded in real-world experience building AI at a rapidly growing tech company, this talk is designed as an introduction for engineers and curious builders alike. No coding is required, but attendees will leave with a clear mental model of how agentic workflows are structured, tested, and improved—and a practical toolkit to begin designing systems that build and iterate reliably, all on their own.
Intended as a one-hour session; the format can be adapted.
Tela Wittig
Full Stack Engineer at Moonshot AI
New York City, New York, United States