What Building an AI Product Actually Taught Me
This isn't a talk about AI-assisted coding. It's about what happens when LLM responses drive your application's behaviour; when the output gets parsed, validated, and executed by your business logic.
While building BraidFlow, I discovered that the industry's documented problems aren't solved by better models or bigger context windows. Context drift in multi-turn conversations isn't just a prompt engineering challenge; it's an architectural one. Structured output reliability doesn't come from JSON mode alone; schema field ordering and validation-driven retry logic act as instructions in their own right. And cost optimisation isn't about getting a one-shot answer right with the best-performing model; informed retry prompts with cheap models can work just as well.
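To make the retry idea concrete, here is a minimal sketch of validation-driven retry, where each validation failure is folded back into the next prompt as an instruction. The schema, the `validate` and `structured_call` helpers, and the `llm` callable are all hypothetical illustrations, not BraidFlow's actual code:

```python
import json

# Hypothetical schema: required fields and their expected types.
REQUIRED_FIELDS = {"title": str, "priority": int}

def validate(raw: str):
    """Return (data, None) on success, or (None, error) to feed the retry prompt."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return None, f"Output was not valid JSON: {exc}"
    for field, expected in REQUIRED_FIELDS.items():
        if field not in data:
            return None, f"Missing required field '{field}'"
        if not isinstance(data[field], expected):
            return None, f"Field '{field}' must be of type {expected.__name__}"
    return data, None

def structured_call(prompt: str, llm, max_retries: int = 3):
    """Call a (cheap) model, retrying with the validation error as an instruction."""
    attempt_prompt = prompt
    for _ in range(max_retries):
        raw = llm(attempt_prompt)
        data, error = validate(raw)
        if error is None:
            return data
        # The validation failure becomes part of the next prompt,
        # so the retry is informed rather than blind.
        attempt_prompt = (
            f"{prompt}\n\nYour previous output was rejected: {error}. "
            "Return only the corrected JSON object."
        )
    raise ValueError("Exhausted retries without valid structured output")
```

The design choice this sketches: rather than paying for a stronger model to get the first attempt right, the validator's error message does double duty as the corrective instruction for a cheaper model's next attempt.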
I'll share the benchmarking data that surprised me, the architectural patterns I tried along the way, and the prompt engineering insights that actually worked. You'll see real code, real failures, and the decisions that finally stuck.
No theory. No hand-waving. Just lessons from shipping features that had to work.
Ben Dechrai
Disaster Postponement Officer
Kansas City, Missouri, United States