Your AI Is Still Biased (Even After You Checked)
We often talk about bias in AI like it's something you can "fix" before launch and forget. Run a few checks, clean your data, tick some boxes, and you’re good to go—right? Not really.
In real life, AI systems keep changing after they go live: they get retrained, updated, and put in front of real people in real situations. Every one of those changes is a chance to introduce new bias, even if everything was done “right” during development. That’s why we need to stop treating bias as a one-time issue and start thinking of it as part of the whole AI lifecycle.
This talk is about how bias doesn’t just happen before deployment; it can also creep in and grow during testing, retraining, and even regular use. And if we want to build fairer, more trustworthy AI, we need to treat bias as an ongoing process, not a checklist item.
I’ll also talk about why we can’t leave this work to AI engineers alone. Designers, product managers, developers, testers, legal teams, and even users all need to be part of the conversation, because each of them sees risks, gaps, and impacts from a different angle.
We’ll look at:
- How bias creeps in after launch, even if your original model was clean
- Examples of how retraining or personalization can make things worse
- How to build "de-biasing" into your everyday work, not just at the start (see the sketch after this list)
- Ways to involve different teams and stakeholders at every stage
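As one concrete illustration of what ongoing de-biasing can look like, here is a minimal sketch of a recurring post-deployment fairness check: it compares selection rates across groups on a recent batch of production predictions and flags the gap if it exceeds a tolerance. The group names, threshold, and `recent_predictions` data are illustrative assumptions, not material from the talk itself.

```python
# Minimal sketch of a post-deployment fairness check (illustrative assumptions).
# In practice this would run on a schedule against logged model outputs.
from collections import defaultdict

def selection_rates(records):
    """Fraction of positive decisions per group in (group, decision) pairs."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += int(decision)
    return {group: positives[group] / totals[group] for group in totals}

def demographic_parity_gap(records):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical batch of (group, model_decision) pairs pulled from production logs.
    recent_predictions = [
        ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
        ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
    ]
    THRESHOLD = 0.2  # illustrative tolerance; set per use case and regulation
    gap, rates = demographic_parity_gap(recent_predictions)
    print(f"Selection rates by group: {rates}")
    if gap > THRESHOLD:
        print(f"Fairness alert: demographic parity gap {gap:.2f} exceeds {THRESHOLD}")
```

A check like this could run after every retraining job or as part of a monitoring dashboard, so drift in fairness metrics is caught the same way drift in accuracy is.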