Building AI-Native Features in React
Your AI feature is a chat widget in the corner. Your users can tell.
Most React teams treat AI integration like any other API — fetch a response, render it, done. But LLM-powered features break every assumption we rely on: responses stream unpredictably, latency is measured in seconds not milliseconds, outputs are non-deterministic, and "retry" doesn't give you the same answer twice.
This session covers the production-tested architectural patterns that make AI features feel native to your application — not bolted on as an afterthought.
You'll learn:
- Streaming UI composition with Suspense boundaries and Server Components
- State architecture for conversational context, window limits, and multi-turn interactions
- Why traditional optimistic updates break for probabilistic operations (and what to use instead)
- Graceful degradation strategies: expensive model → cheaper model → no AI
- Production operations: observability, cost management, and testing non-deterministic outputs
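The streaming bullet above boils down to one mechanical core: tokens arrive incrementally, and the UI must re-render on each partial result rather than waiting for the full response. A minimal sketch of that loop in plain TypeScript (the `fakeTokens` generator stands in for a real LLM streaming API; in a React component, `onUpdate` would be a `setState` call):

```typescript
// Stand-in token source; a real integration would consume an LLM stream.
async function* fakeTokens(): AsyncGenerator<string> {
  for (const t of ["Hello", ", ", "world"]) yield t;
}

// Accumulate streamed tokens into successive snapshots so the UI can
// render partial output as it arrives, then return the final text.
async function collectStream(
  tokens: AsyncIterable<string>,
  onUpdate: (partial: string) => void,
): Promise<string> {
  let text = "";
  for await (const token of tokens) {
    text += token;
    onUpdate(text); // each snapshot triggers a re-render in React
  }
  return text;
}
```

The Suspense and Server Component patterns in the session layer on top of this loop: the boundary decides what to show while the first token is still in flight.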
Perfect for: React developers shipping AI-powered features, frontend architects evaluating LLM integration patterns, and anyone whose "add AI" ticket turned out to be harder than the demo suggested.
Visual walkthroughs: Real architectural patterns for streaming LLM responses, side-by-side comparisons of bolted-on vs. AI-native approaches, and progressive breakdowns from "it works in the demo" to production-ready architecture.
Walk away with: Streaming UI patterns using Suspense and Server Components, a state management strategy for conversational AI features, error boundary patterns for when AI fails or hallucinates, and a graceful degradation playbook that keeps your app useful even when the model is down.
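The degradation playbook mentioned above can be sketched as a simple tier chain: try the expensive model, fall back to a cheaper one, and finally to a non-AI default so the feature stays useful when every model is down. The `CallModel` signature and the tier contents here are illustrative assumptions, not a real SDK:

```typescript
// Hypothetical shape for "call some model with a prompt".
type CallModel = (prompt: string) => Promise<string>;

// Walk the tiers in order (expensive -> cheaper); if every model call
// throws, return static non-AI content instead of an error state.
async function withFallbacks(
  prompt: string,
  tiers: CallModel[],
  fallbackText: string,
): Promise<string> {
  for (const call of tiers) {
    try {
      return await call(prompt);
    } catch {
      // Model unavailable, rate-limited, or over budget: try next tier.
    }
  }
  return fallbackText; // the "no AI" tier keeps the app functional
}
```

A production version would also distinguish retryable errors from hard failures and record which tier served the response, but the ordering idea is the same.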
Martin Rojas
AI Acceleration Lead at PlayOn Sports
Atlanta, Georgia, United States