Bias Beyond the Model: Why Equity Is Required for Fairness and Why Fairness Is a Systems Property

Most conversations about AI fairness focus on model metrics. Demographic parity. Equalized odds. Accuracy gaps.
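For concreteness, two of these metrics can be sketched in a few lines (a toy illustration with made-up data and hypothetical function names, not any particular library's API):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap across groups in true-positive and false-positive rates."""
    gaps = []
    for label in (1, 0):  # TPR when label == 1, FPR when label == 0
        rates = [y_pred[(group == g) & (y_true == label)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Toy data: eight predictions split across two groups
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))          # 0.5: positive rates 0.25 vs 0.75
print(equalized_odds_gap(y_true, y_pred, group))      # 0.5: TPR 0.5 vs 1.0, FPR 0.0 vs 0.5
```

Note that both numbers describe only the model's outputs on a fixed dataset; they say nothing about how that dataset was sourced, thresholded, or acted upon, which is precisely the gap the talk addresses.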

But fairness is not a model property. It is a systems property.

Bias enters long before model training and continues long after deployment. It lives in data sourcing decisions, schema design, feature selection, thresholds, feedback loops, and human override policies. A model can appear statistically fair while the system that surrounds it produces inequitable outcomes.

In this session, we reframe fairness as an emergent architectural property. We explore why equity is required for meaningful fairness, how structural inequities become encoded into data pipelines, and why bias cannot be solved with periodic audits alone.

Attendees will leave with a systems-level mental model for evaluating fairness beyond model weights and a new framework for designing AI systems that account for real-world human impact.

Angel Ceballos

Founder and CEO @ SeraphicGuardian | Architect of Defensible Systems

Raleigh, North Carolina, United States
