
What Actually Breaks When You Ship AI to Production

Deploying an LLM is the easy part. Keeping it reliable, accurate, and resilient in production is where teams get burned, and most never see it coming.

This session is a practitioner's postmortem from building and operating end-to-end AI-enabled systems with GPT-4 and sentence transformers across Python and PHP stacks: from model training, through API integration, to server infrastructure.

What we'll cover:
- API key expiry silently taking down live features and how to design fallbacks that actually work
- Model accuracy drift in production: how to detect it before your users do
- The gap between dev behavior and 2am production behavior under real traffic
- Bridging Python ML pipelines with PHP production APIs
- What to monitor, what to automate, and what to just accept will break
- A practical pre-launch checklist for AI-integrated systems
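To give a flavour of the fallback-design topic above, here is a minimal Python sketch of degrading gracefully when an upstream API key expires, instead of letting the feature go down. All names (`AuthError`, `call_llm`, `cached_answer`) are hypothetical illustrations, not code from the session itself.

```python
class AuthError(Exception):
    """Raised when the upstream LLM API rejects our credentials."""


def call_llm(prompt: str) -> str:
    # Stand-in for a real API call; here it simulates an expired key.
    raise AuthError("API key expired")


def cached_answer(prompt: str):
    # Stand-in for a cache of recent good responses; None = cache miss.
    return None


def answer(prompt: str) -> str:
    try:
        return call_llm(prompt)
    except AuthError:
        # Alert on-call out of band, but keep the user-facing feature alive:
        # serve a cached response if one exists, else a graceful degradation.
        cached = cached_answer(prompt)
        if cached is not None:
            return cached
        return "Sorry, this feature is temporarily degraded."
```

The point of the pattern is that an auth failure is caught at the integration boundary and converted into a degraded-but-working response, rather than surfacing as a 500 to users.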

This is real-world experience, not a framework walkthrough. No vendor demos. No slides that only work in theory.

Rajkumar Sakthivel

AI Systems Engineer | Building LLM Applications and Private Cloud at Scale | International Conference Speaker | Oxford

London, United Kingdom
