
Assessing and mitigating fairness issues in AI

Fairness is one of the key aspects of building trust in AI systems, and thus of unlocking their potential. Societal and technical factors together shape who might be harmed by AI systems and how: there are many complex sources of unfairness, and a variety of societal and technical processes for mitigating them.

This session presents a demo of credit-card default models. Fairness-related harms may arise when a model makes more mistakes for some groups of applicants than for others. Because these predictions are sometimes used to automatically reject or accept an application, such errors directly impact both the applicant and the organization.

The session explains how to use metrics and algorithms that take the AI system's broader societal context into account, so that data scientists can apply them to improve the fairness of AI systems.
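One common class of such metrics compares a model's error rates across groups of applicants. The sketch below is illustrative only (the data, function names, and group labels are invented, not taken from the session's demo): it computes per-group false positive and false negative rates for a binary default classifier, where a false positive means a non-defaulting applicant would be wrongly rejected.

```python
def group_error_rates(y_true, y_pred, groups):
    """Return per-group false positive and false negative rates.

    y_true, y_pred: lists of 0/1 labels (1 = predicted/actual default).
    groups: list of group labels, one per example.
    """
    stats = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        fp = sum(1 for i in idx if y_pred[i] == 1 and y_true[i] == 0)
        fn = sum(1 for i in idx if y_pred[i] == 0 and y_true[i] == 1)
        neg = sum(1 for i in idx if y_true[i] == 0)  # actual non-defaulters
        pos = sum(1 for i in idx if y_true[i] == 1)  # actual defaulters
        stats[g] = {
            "fpr": fp / neg if neg else 0.0,
            "fnr": fn / pos if pos else 0.0,
        }
    return stats

# Hypothetical toy data: 1 = default (application rejected).
y_true = [0, 0, 1, 1, 0, 0, 1, 1]
y_pred = [0, 1, 1, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = group_error_rates(y_true, y_pred, groups)
# Here group A has a higher false positive rate than group B, i.e. more
# non-defaulting A applicants would be wrongly rejected.
```

Large gaps between groups on these rates are one signal of the fairness-related harms the session discusses; mitigation then involves both technical interventions and attention to the societal context in which the model is used.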

Patricio Cofre

EY Partner and Microsoft MVP/RD

Santiago, Chile


