What Actually Breaks When You Ship AI to Production
Everyone on your team is excited to ship AI features. Nobody talks about what happens the week after.
This is a practitioner's postmortem from building and running end-to-end AI-enabled systems: from training sentence transformer models to integrating GPT-4/5 APIs, across Python and PHP stacks, all the way down to the server. No theory. No vendor demos. Just what broke, why it broke, and what we'd do differently.
What we'll cover:
- OpenAI API key expiry silently taking down live features, and how to design fallbacks your whole team can reason about
- Model accuracy drift in production: how to catch it before your users file a bug report
- The gap between "it works in dev" and "it works at 2am under real traffic"
- Bridging Python ML pipelines with PHP production APIs without losing your mind
- What to monitor, what to automate, and what to just accept will occasionally break
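The fallback pattern in the first bullet can be sketched as a thin wrapper around the AI call, so that an auth failure degrades gracefully instead of surfacing as an outage. This is an illustrative sketch only; `call_llm`, `AuthError`, and the fallback message are hypothetical names, not code from the talk:

```python
class AuthError(Exception):
    """Stands in for the client library's authentication error (e.g. expired key)."""

def call_llm(prompt: str) -> str:
    # Placeholder for the real API call; here it simulates an expired key.
    raise AuthError("API key expired")

def answer(prompt: str, fallback: str = "Service temporarily unavailable.") -> str:
    """Return the model's answer, or a deterministic fallback the rest of
    the system can reason about instead of an unhandled 500."""
    try:
        return call_llm(prompt)
    except AuthError:
        # In production: log, alert, and increment a metric here.
        return fallback

print(answer("Summarise this ticket"))  # degrades to the fallback string
```

The key design choice is that the fallback is explicit and typed into the interface, so every caller knows what "the AI is down" looks like.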
This talk is for developers, testers, and anyone involved in shipping software that has an AI component. You don't need an ML background; you need to know what questions to ask before you go live.
Rajkumar Sakthivel
AI Systems Engineer | Building LLM Applications and Private Cloud at Scale | International Conference Speaker | Oxford
London, United Kingdom