Bias Isn’t in the Model — It’s in the Platform Design

We spend a lot of time debating biased models. Far less time examining biased data platforms.

In this session, I’ll argue—practically, not philosophically—that many fairness issues are baked in long before AI enters the picture. They show up in ingestion priorities, mapping decisions, metric definitions, and which questions leadership chooses to ask of the data.

Drawing on healthcare and enterprise data platform work, I’ll share concrete examples where “neutral” design choices created skewed outcomes: whose data arrived late, which populations were aggregated away, which metrics became defaults simply because they were easy to compute. No malicious intent. Just quiet bias embedded in the plumbing.
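
To make one of these failure modes concrete, here is a minimal sketch of how late-arriving data can be surfaced before any model sees it. It assumes a pandas DataFrame `events` with `event_time` and `ingested_at` timestamps and a grouping column such as `clinic_id`; every name here is illustrative, not something specified in the session.

```python
# Hypothetical sketch: surfacing ingestion-lag skew before modeling.
# Assumes a DataFrame `events` with datetime columns `event_time` and
# `ingested_at` plus a grouping column -- names are illustrative only.
import pandas as pd

def ingestion_lag_by_group(events: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Median hours between an event occurring and landing in the platform,
    broken out per group. Large gaps mean some populations' data arrives
    late and silently drops out of 'latest snapshot' metrics."""
    lag_hours = (events["ingested_at"] - events["event_time"]).dt.total_seconds() / 3600
    return (
        events.assign(lag_hours=lag_hours)
        .groupby(group_col)["lag_hours"]
        .median()
        .sort_values(ascending=False)
        .to_frame("median_lag_hours")
    )

# Example: clinics whose records land days late are underrepresented
# in any dashboard cut on "data as of yesterday".
# print(ingestion_lag_by_group(events, "clinic_id").head())
```

If one clinic’s records routinely arrive days late, every “data as of yesterday” view quietly underrepresents its patients. No model required.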

This talk is designed for builders, PMs, and platform leaders—not ethicists. We’ll walk through where bias actually hides in data pipelines, why governance checklists often miss it, and how women in data roles can intervene early without turning every meeting into a values debate.

The aim is practical awareness. If you influence how data is collected, transformed, or framed, you influence outcomes—whether you claim that responsibility or not.

Key Takeaways

Where bias enters before modeling begins

Design questions that surface hidden assumptions

Lightweight checks that fit real delivery timelines (sketched below)
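
As one illustration of what “lightweight” can mean, here is a hedged sketch: a single coverage assertion dropped into an existing pipeline step, so that small populations cannot be aggregated away unnoticed. The `segment` column and the 2% threshold are assumptions made for the example, not prescriptions from the talk.

```python
# Hypothetical sketch of a "lightweight check": a one-assert coverage gate
# that runs inside an existing transform step. The column name `segment`
# and MIN_SHARE threshold are illustrative assumptions.
import pandas as pd

MIN_SHARE = 0.02  # flag any segment that falls below 2% of rows post-transform

def assert_segment_coverage(df: pd.DataFrame, segment_col: str = "segment") -> None:
    shares = df[segment_col].value_counts(normalize=True)
    thin = shares[shares < MIN_SHARE]
    if not thin.empty:
        # Fail loudly instead of letting small populations be aggregated away.
        raise ValueError(f"Segments below {MIN_SHARE:.0%} coverage: {thin.to_dict()}")
```

Failing the run is the point: a loud error at transform time is cheaper than a quietly skewed metric downstream.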

Soundbite

“Most bias isn’t coded. It’s quietly designed.”

Shanthi Sivakumar

Bridging AI Innovation and Ethical Impact—One Human-Centered Solution at a Time

Houston, Texas, United States
