
Fight AI bias with… bias

How can we ensure that AI systems treat everyone fairly?

Human biases influence the outputs of AI models. AI amplifies those biases, and the resulting socio-technical harms undermine fairness, adoption, safety, and well-being.
Harms such as improperly implemented AI-driven predictive policing disproportionately affect legally protected classes of individuals and groups in the United States.

In this talk, Noble walks through tools and libraries that mitigate bias in your ML pipeline, and shows how explainability solutions built on SHAP can explain the predictions of machine learning models, helping you avoid disproportionate product failures caused by fairness and bias issues.
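As a rough illustration of that kind of SHAP workflow, here is a minimal sketch. It is not code from the talk: it assumes the shap and scikit-learn packages, and the dataset and model are placeholder choices.

```python
# Minimal SHAP explainability sketch (illustrative; the dataset and
# model below are assumptions, not taken from the talk).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple model on a public dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summarize which features drive the model's predictions overall.
shap.summary_plot(shap_values, X)
```

Per-prediction plots such as shap.force_plot can then surface why an individual prediction went the way it did, which is often where fairness issues first become visible.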

Fittingly, the 2022 theme for International Women’s Day was #BreakTheBias. Join Noble as he returns to Strange Loop to expand on the topic of bias and deconstruct, by example, techniques for de-biasing datasets, so you can build intelligent systems that are fair and equitable while increasing trust and adoption.
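One widely used family of dataset de-biasing techniques is pre-processing, such as reweighing. The sketch below uses AIF360's Reweighing as an example; the library choice, toy dataframe, and column names are assumptions rather than material from the talk.

```python
# Dataset reweighing sketch with AIF360 (illustrative; the toy data
# and the "sex"/"label" column names are assumptions).
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing

# A tiny toy dataset with one protected attribute.
df = pd.DataFrame({
    "feature": [0.2, 0.5, 0.9, 0.4, 0.7, 0.1],
    "sex":     [0, 0, 0, 1, 1, 1],   # protected attribute
    "label":   [0, 1, 1, 0, 0, 1],   # favorable outcome = 1
})
dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
)

# Reweighing assigns instance weights that equalize the joint
# distribution of group membership and outcome before training.
rw = Reweighing(
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)
reweighed = rw.fit_transform(dataset)
print(reweighed.instance_weights)  # use as sample weights when training
```

The resulting weights can be passed to most estimators' sample_weight parameter, so the downstream model trains as if group membership and outcome were independent.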

Tags: Explainability, XAI, Fairness Metrics, ML Evaluation, Model Monitoring, Interpretable Models, Machine Learning, ML/AI

Noble Ackerson

Responsible AI Product Strategy & Data Governance


