Speaker

Shanthi Sivakumar

Bridging AI Innovation and Ethical Impact—One Human-Centered Solution at a Time

Houston, Texas, United States

Shanthi S. is a digital health innovator, AI ethics advocate, and founder of FairFrame AI, a nonprofit initiative focused on mitigating algorithmic bias in healthcare, education, and career technologies. With over a decade of experience in healthcare IT and consulting for Fortune 500 companies, Shanthi brings a rare blend of clinical insight, technical strategy, and ethical foresight to the development of responsible AI systems.
Her recent work spans designing emotionally intelligent career transition tools for displaced healthcare workers, developing micro-AI agents to automate complex revenue cycle workflows, and advising hospital systems on real-time location systems (RTLS) and AI-driven decision support tools. A transformation strategist, Shanthi champions inclusive innovation—ensuring AI solutions are not only powerful but equitable.

She speaks on topics such as:

Human-centered AI design in healthcare
Real-world consequences of algorithmic bias
AI in clinical operations and workforce transformation
Ethical guardrails for enterprise AI adoption

Shanthi is a compelling voice at the intersection of empathy and intelligence—advocating for AI that uplifts, not overlooks.

Area of Expertise

  • Business & Management
  • Health & Medical
  • Information & Communications Technology

Topics

  • AI Ethics
  • AI Agents
  • AI Bias
  • AI & Machine Learning
  • Anti-Bias
  • Generative AI Use Cases
  • Clinical Feedback Loops for AI Models
  • Digital Health
  • Digital Transformation
  • AI for Startups
  • AI in Health
  • AI for Social Good
  • AI Ethics Across Design
  • Innovation Strategy
  • Innovate for Change
  • Innovation
  • Business & Innovation
  • Women in Leadership
  • Women in Tech
  • Business Leadership
  • Female Leadership
  • Data Strategy & Leadership
  • Women in Agile
  • Healthcare AI
  • Healthcare
  • AI Healthcare Agents
  • Healthcare Technology
  • Digital Strategy and Transformation
  • Digital Strategy
  • Digital Workplace
  • Building Digital Products
  • Agile Digital Transformation & Data Analytics
  • Digital Business Processes
  • Digitalization
  • Project Management
  • Product Management
  • Product Strategy
  • Agile Project Management
  • Project & Portfolio Management
  • Project & Portfolio
  • Product Innovation
  • Technology Innovation
  • Innovation in Healthcare
  • Business Model Innovation
  • Agile Transformation
  • Business Transformation

Bias Isn’t in the Model — It’s in the Platform Design

We spend a lot of time debating biased models. Far less time examining biased data platforms.

In this session, I’ll argue—practically, not philosophically—that many fairness issues are baked in long before AI enters the picture. They show up in ingestion priorities, mapping decisions, metric definitions, and which questions leadership chooses to ask of the data.

Drawing from healthcare and enterprise data platform work, I’ll share concrete examples where “neutral” design choices created skewed outcomes: whose data arrived late, which populations were aggregated away, which metrics became defaults simply because they were easy to compute. No malicious intent. Just quiet bias embedded in plumbing.

This talk is designed for builders, PMs, and platform leaders—not ethicists. We’ll walk through where bias actually hides in data pipelines, why governance checklists often miss it, and how women in data roles can intervene early without turning every meeting into a values debate.

The aim is practical awareness. If you influence how data is collected, transformed, or framed, you influence outcomes—whether you claim that responsibility or not.

Key Takeaways

Where bias enters before modeling begins

Design questions that surface hidden assumptions

Lightweight checks that fit real delivery timelines

Soundbite

“Most bias isn’t coded. It’s quietly designed.”

The Myth of “Just Learn More Tech” for Women in Data

Early in my career, I believed the advice everyone gives women in data: learn one more tool, one more language, one more platform. So I did. SQL, analytics, data pipelines, cloud, healthcare data, enterprise platforms. The work kept getting harder. The recognition didn’t scale with it.

This session is a reality check: not anti-learning, not anti-tech, but honest. Many women in data are already technically capable. The real blockers are decision rights, narrative control, and who gets trusted when ambiguity shows up. Yet the default advice remains: “upskill more.”

I’ll walk through real delivery scenarios from large data programs where women carried the technical and operational load but were still positioned as “support,” while less technical voices shaped the story and decisions. We’ll unpack why skill-stacking alone doesn’t fix visibility gaps, how over-competence can trap women in execution roles, and when “learning more” becomes avoidance, both by leaders and by ourselves.

This isn’t a motivational talk. It’s a practical reframing. We’ll discuss where additional skills do matter, where they don’t, and what actually shifts influence in data organizations: framing problems, owning risk conversations, and deciding when to stop proving and start positioning.

Key Takeaways

When more skills help and when they quietly hurt

How to recognize when you’re over-investing in execution

Practical ways to move from “reliable expert” to decision influencer

Soundbite

“If learning more tech was the answer, most women in data would already be running the room.”

Invisible Labor in Data Programs: Why Women Carry the Real Load

Every large data program has two tracks: the official plan and the work that actually keeps it from falling apart. The second one rarely shows up in Jira.

This session is about that invisible layer—stakeholder translation, conflict smoothing, risk anticipation, follow-ups, context retention. In my experience, women disproportionately carry this load, especially in complex data and platform programs where ambiguity is constant and accountability is diffuse.

I’ll share real examples from enterprise data initiatives where delivery succeeded not because the architecture was perfect, but because someone quietly connected dots others didn’t want to own. The work wasn’t assigned. It wasn’t rewarded. But without it, the program would have stalled.

We’ll talk honestly about why women step into this gap—sometimes by instinct, sometimes by expectation—and how it slowly erodes energy, credibility, and career momentum if left unnamed. This isn’t about blaming teams. It’s about recognizing a structural pattern and learning how to surface, scope, and negotiate this work without being labeled “difficult” or “not collaborative.”

The goal is not to stop caring. It’s to stop carrying everything silently.

Key Takeaways

How to identify invisible labor before it consumes you

Language to make this work explicit and bounded

Tactics to redistribute load without burning trust

Soundbite

“If your data program feels ‘smooth,’ someone is absorbing the friction—and it’s usually a woman.”

You Don’t Lack Skills: You Are Carrying Too Much

For a long time I believed what many women in tech are told. If you feel behind, learn more. If you feel unsure, get another certification. If you feel invisible, work harder and stay quieter.

So I did all of that.

I worked in complex data and healthcare technology programs where the work never really ends. I was often the only woman in technical rooms. The one translating between teams. The one calming tensions. The one making sure things did not break even when no one asked me to do it. On paper I was doing well. Inside I was constantly tired and quietly questioning myself.

When I became a mother, the questions became louder. Not about my ability, but about whether I still belonged in spaces that rewarded availability more than impact. At the same time, I was entering new domains, learning new technologies, and starting over again in many ways, all while carrying years of experience that did not always get recognized.

This episode is a reflection on that journey. The emotional labor behind big projects. The pressure to be dependable but not demanding. The vulnerability of starting again and again. And the moment I realized that the problem was not a lack of skills but the amount of invisible weight women carry in tech.

I will also share what pushed me to create FairFrame AI. Not confidence. Discomfort. A need to build something that acknowledged bias, burnout, and the human cost of technology.

My hope is that listeners feel seen. That they trust their lived experience. And that they leave believing that who they are, and not just what they know, belongs in the future of tech.

What listeners will take away

Relief in realizing they are not failing; they are overburdened

Words to name emotional and invisible work

Courage to value their experience not just their output

Soundbite

“Most women in tech are not behind. They are holding everything together.”

The Bias You Can’t See: Why Polished AI Interfaces Are the New Blind Spot

When most people think of AI bias, they picture messy datasets or flawed algorithms. But in 2025, the most dangerous bias isn’t in the math; it’s in the presentation. It’s the elegant dashboard that displays biased results as “high confidence.” It’s the heatmap that looks like it’s telling the full story but hides the disparities under layers of visual gloss.

I learned this the hard way. While working on an AI tool in a hospital system, every accuracy metric looked perfect and the model performed well in validation tests. But when rolled out to real patients, certain groups received less accurate diagnoses. The cause wasn’t poor data quality or algorithm drift. It was the interface. The way the results were framed and color-coded subtly discouraged clinicians from questioning the output. The bias wasn’t shouting from the code; it was whispering in the design.

That moment led me to found FairFrame AI, a nonprofit dedicated to making sure AI works for all people, not just the statistical majority.

In this talk, I’ll show the hidden ways UI and UX choices can distort trust in AI systems. Through case studies in healthcare, finance, and hiring, I’ll unpack the cognitive shortcuts and interpretation traps that happen when design meets data science. I’ll also share my Bias Detection Playbook, which goes beyond dataset audits to include:

Interface bias audits — testing the “trust signals” your design sends

Real-user bias drills — understanding how humans interpret AI outputs differently across groups

Live post-deployment monitoring — catching bias drift that hides behind a clean UI

Why it matters now: In 2025, AI tools are being rolled out at unprecedented speed, often with interface polish as the selling point. If we don’t train ourselves to look past the aesthetics, we risk embedding systemic inequities into “trustworthy” tools that are anything but.

By the end of this session, attendees will know exactly how to look beneath the surface of any AI tool — and ensure that what’s beautiful is also fair.

Soundbite: “AI bias doesn’t always shout in code — sometimes it whispers in design.”

Bias at Scale: How Tiny AI Errors Become Global Inequities

In AI, a “small error” is never small. A model that’s just 2% less accurate for one demographic might seem insignificant in the lab. But once deployed to millions of users, that gap can turn into systemic exclusion — and at scale, it becomes a global inequity machine.
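As a back-of-the-envelope illustration of that claim (the population size and accuracy figures here are assumed for the example, not drawn from any real deployment), the same 2% gap can be computed at lab scale and at production scale:

```python
# Illustrative arithmetic only: the user counts and accuracy values below
# are assumptions chosen to mirror the "2% less accurate" scenario.
def extra_errors_at_scale(users: int, baseline_accuracy: float, gap: float) -> int:
    """Additional misclassified users in a subgroup that is `gap` less accurate."""
    subgroup_accuracy = baseline_accuracy - gap
    return round(users * baseline_accuracy) - round(users * subgroup_accuracy)

# A model that is 95% accurate overall but 2% worse for one group:
# in a lab sample of 1,000 the gap is ~20 cases; across 5 million users
# it becomes ~100,000 additional errors concentrated in one population.
print(extra_errors_at_scale(1_000, 0.95, 0.02))      # 20
print(extra_errors_at_scale(5_000_000, 0.95, 0.02))  # 100000
```

The point of the sketch is that the relative gap never changes; only the absolute harm does, which is why it passes review in testing and fails people in production.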

I’ve seen this across industries. In healthcare, slightly underestimating risk scores for certain populations meant thousands missed out on preventative care. In finance, a fractional difference in loan approval rates translated to billions in lost opportunity for entire communities. The scary part? These systems all passed internal reviews before launch. The bias was subtle enough to hide until scale amplified it.

In this talk, I’ll break down exactly how bias scales in the wild, and how organizations can build early-warning systems to detect it before it snowballs. We’ll explore:

“Scaling risk” metrics — spotting bias that looks harmless in testing but dangerous in deployment

Bias drift detection across geographies, demographics, and time

Governance frameworks for global AI deployments that adapt to local equity needs
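A minimal sketch of the drift-detection idea above — note that the group labels, metrics, and the 1% alert threshold are illustrative assumptions, not the framework presented in the talk — is simply tracking the spread in subgroup accuracy over time and alerting when it widens:

```python
# Minimal bias-drift check: compare the best-vs-worst subgroup accuracy gap
# now against the gap at launch. Labels and threshold are assumptions.
from typing import Dict

def accuracy_gap(group_accuracy: Dict[str, float]) -> float:
    """Spread between the best- and worst-served groups."""
    return max(group_accuracy.values()) - min(group_accuracy.values())

def drifted(baseline: Dict[str, float], current: Dict[str, float],
            threshold: float = 0.01) -> bool:
    """Flag when the subgroup gap has widened by more than `threshold`."""
    return accuracy_gap(current) - accuracy_gap(baseline) > threshold

baseline = {"group_a": 0.94, "group_b": 0.93}   # gap at launch: ~1%
current = {"group_a": 0.94, "group_b": 0.90}    # gap in production: ~4%
print(drifted(baseline, current))  # True
```

A real deployment would slice this by geography and time window as the abstract suggests, but even this toy version catches the pattern described above: a system that looked fair at launch quietly diverging for one group.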

Why it matters now: In 2025, AI is scaling faster than regulation can adapt. The organizations that learn to measure and manage bias at scale will not only protect their users but will also avoid massive brand, legal, and social fallout.

Attendees will leave with a clear, actionable framework for stopping small errors from becoming large-scale harm.

Soundbite: “In AI, bias doesn’t just scale — it snowballs.”

Automation Without Abandonment: Designing AI That Builds Careers, Not Breaks Them

Automation is coming for millions of jobs. But the real threat isn’t just replacement — it’s exclusion. AI-powered recruitment systems and career guidance tools promise efficiency, but too often they quietly decide who even gets a chance to compete for the jobs of tomorrow.

I saw this firsthand while developing LUMena AI, a career copilot for healthcare workers whose roles were at risk of automation. During testing, I reviewed competitor platforms that “helped” users find new career paths. The recommendations were polished, data-driven — and biased. Women were often nudged toward lower-paying care roles, minority candidates toward less technical fields. The algorithms weren’t malicious; they were simply reflecting and amplifying historical bias. But the result was digital redlining: invisible, scaled, and self-reinforcing.

In this session, I’ll share how we turned AI from a gatekeeper into a bridge — using bias-aware algorithms to:

Surface hidden transferable skills that workers didn’t know could open new career paths

Map equitable job transition pathways across industries

Personalize upskilling recommendations to unlock opportunities, not reinforce ceilings

We’ll dive into practical methods for auditing workforce AI, designing for inclusion, and creating AI tools that help humans rise. I’ll show how these principles have been applied not only in healthcare, but in finance, education, and manufacturing, proving they’re industry-agnostic.

Why it matters now: In 2025, every major enterprise is exploring AI-driven HR tech. The ones that get equity right will win the talent war — not just in skill, but in trust. Those that don’t risk automating inequality at scale.

By the end of this session, leaders and builders will have a blueprint for AI that’s both future-focused and human-centered.

Soundbite: “AI can’t replace humans if it’s busy helping them rise.”
