AI vs Regex: When Is Regex Better?
AI is a tempting default when a product needs to scale automation. It is broadly applicable, and it feels safer than writing brittle rules. But general-purpose models can underperform specialized techniques on narrow, high-precision tasks, and their failure modes can be hard to spot. This talk is a case study on scoring assignments against a rubric where our AI approach did not meet expectations.
In this talk, we show how to decide when AI is necessary and when simpler techniques are more reliable, using a real-world case study of a startup that tried generative AI (GenAI), natural language processing (NLP), and machine learning (ML) approaches to solve a core problem. We’ll walk through where we used AI, where it fell short and why, and what we did instead, including deterministic techniques. The upside of AI is broad coverage on messy inputs; the costs are drift, latency, and harder-to-explain errors. We apply three controls: define “good” with concrete success criteria, prioritize which failures matter based on user impact, and watch for drift when the model or data changes. If you are an engineer or on a team that automates text or code understanding, you’ll leave able to choose the right tool, evaluate the tradeoffs, and debug predictable failure modes.
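To make the contrast concrete, here is a minimal Python sketch of the kind of deterministic check the talk contrasts with model-based scoring. The rubric item and the names (`parse_date`, `meets_rubric_item`) are hypothetical illustrations, not taken from the talk; the point is that a regex rule is fast, repeatable, and easy to explain when it fails.

```python
import re

# Hypothetical rubric item (illustration only): the submission must
# define a function named parse_date. The pattern anchors on a line
# beginning with `def`, so the check is deterministic and repeatable.
DEF_PARSE_DATE = re.compile(r"^\s*def\s+parse_date\s*\(", re.MULTILINE)

def meets_rubric_item(source: str) -> bool:
    """Return True if the submission defines a parse_date function."""
    return DEF_PARSE_DATE.search(source) is not None

submission = "def parse_date(s):\n    return s.split('-')\n"
print(meets_rubric_item(submission))  # True: the rubric item is satisfied
```

When a rubric item can be stated this precisely, a rule like this beats a model on precision and debuggability; the model earns its keep on the messy items that resist exact patterns.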
This talk has previously been given at CodeMash 2023, KCDC 2024, and dev up 2025.
Robert Herbig
AI Practice Lead at SEP
Indianapolis, Indiana, United States