Session

Bias In, Bias Out: Fixing the Root Cause of AI Inequity

AI systems are only as fair as the data they learn from. When bias creeps into data, it quietly shapes models, predictions, and decisions, often in ways that reinforce inequity. In this talk, I’ll explore how bias enters at every stage of the data lifecycle, from collection to modeling, and how it ultimately impacts AI outcomes. Using practical examples, I’ll share actionable ways to detect and reduce bias early, before it becomes embedded in analytics or algorithms.

You’ll walk away with a framework for building more equitable data and AI systems, starting at the source: the data itself.

Key takeaways:
1. Common types of bias in data and how they affect AI outcomes
2. Why fixing data bias is essential for responsible and fair AI
3. Core principles of bias detection and reduction
4. How to debias each stage of the data lifecycle to improve AI integrity

I have worked with data at scale on products with 400M+ users, so I have seen firsthand what bad data can do. Over the years, I’ve developed a robust framework for narrowing down data issues quickly so they can be fixed. I’ve also developed policies for checking data for bias that help businesses be more thoughtful in their data solutions.

I have given variations of this talk, many of them focused on the broader concept of data quality; in the last year alone, two of those sessions were billed as keynotes. I can adapt this talk to a more technical overview, a more business-oriented overview, or a mix of both, depending on the audience.

Shailvi Wakhlu

Data Science & Analytics Leader | "Self-Advocacy" Speaker, Author, and Consultant

San Francisco, California, United States
