The Algorithm That Nearly Killed Me: When Testing Isn't Enough
When my new insulin pump’s algorithm confidently delivered what it claimed were hefty doses while my blood sugar soared toward dangerous levels, I faced a terrifying reality: the software was learning, but learning the wrong things. And it was utterly convinced it was doing a great job.
This isn’t just a medical device story: it’s a wake-up call for our entire industry. As ML and AI become critical infrastructure, they’re exposing fundamental flaws in how we approach quality. We test implementation rather than behaviour, write tests that validate our assumptions rather than challenge them, and mistake comprehensive coverage for genuine safety.
Drawing on 15+ years as a software engineering trainer and consultant across multiple industries, I’ll explore how traditional testing blind spots become genuinely dangerous when systems learn and adapt. We’ll examine why teams confuse feeling safe with being safe, how AI amplifies our existing quality gaps, and what it really means to test systems that make decisions affecting real lives.
You’ll leave with practical approaches to testing that focus on impact over metrics, strategies for validating both deterministic and learning systems, and the essential questions every team should ask when software moves beyond just processing data to making decisions that matter.
Because whether it’s a payment system, a recommendation algorithm or life-critical medical software, our users are trusting us to get it right.
Andrew Paul
Senior Software Trainer at Instil, with a passion for education over training!
Limavady, United Kingdom