Session
Growing AI Projects: Where Science Meets Engineering
Over 90% of AI projects fail to deliver business value. It’s easy to spin up an LLM prototype that “looks smart,” but developing it into a product people trust is much harder. At Chattermill, we’ve learned that the real challenge isn’t just scaling infrastructure or fine-tuning models. It’s reconciling two very different cultures: backend systems that optimise for determinism and cost, and data science workflows that wrestle with uncertainty and meaning.
Over the past decade, we’ve learned how to align these worlds. Along the way, we’ve uncovered the patterns that make the difference:
* Observability must evolve: Engineers monitor retries and latency, while data scientists track semantic drift, hallucinations, and embedding overlap. Tooling needs to surface both.
* Two definitions of “done”: For engineering, it’s “correct and reliable.” For data science, it’s “useful and meaningful.” Projects succeed only when both are aligned.
* The last 10% matters differently: Cutting costs often sacrifices semantic fidelity. Quality isn’t only about accuracy; it’s also about preserving meaning.
* Shared ownership in LLMs-as-APIs: Backend owns scalability, spend tracking, and reliability; data science owns semantic quality and trust. Together, this split of responsibilities builds robustness.
We’ll also share a scary tale of a retry that cost us hundreds of dollars, and more than a few puzzled faces.
You’ll leave with practical ideas for bridging backend engineering and data science: feedback loops that respect both determinism and meaning, cost practices that catch drift early, and engineering patterns that make semantic quality observable rather than invisible.
Maciej Rząsa
Senior Software Engineer at Chattermill
Rzeszów, Poland