Detecting Bias in AI: Building Fair and Ethical Models
As AI increasingly shapes our reality, recognizing and mitigating potential biases in these technologies is crucial. Large language models (LLMs) can reflect and amplify existing societal biases present in the vast and diverse datasets used to train them.
We'll discuss how biases in training data can propagate through AI systems, potentially perpetuating societal inequalities. As Dr. Joy Buolamwini, founder of the Algorithmic Justice League, emphasizes, “When AI systems are used as the gatekeeper of opportunities, it is critical that the oversight of the design, development, and deployment of these systems reflect the communities that will be impacted by them.” [1]
While many emerging AI companies aim to ensure their technology benefits humanity, the rapid expansion of AI applications introduces new ethical challenges. This session explores practical techniques for implementing AI observability, governance, and responsible AI management aligned with industry standards.
This presentation will delve into the standards and technical practices necessary to detect and mitigate bias in AI systems. You'll learn how to:
- Identify bias sources in AI systems
- Implement fair AI practices throughout the development lifecycle
- Ensure compliance with emerging AI regulations and ethics guidelines
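To ground the first two points, one of the simplest fairness checks discussed in sessions like this is demographic parity: comparing a model's positive-prediction rates across demographic groups. A minimal sketch, using hypothetical prediction data (not from any real model) purely to illustrate the calculation:

```python
# Minimal sketch: demographic parity difference as one bias metric.
# The prediction lists below are illustrative, not real model output.

def selection_rate(predictions):
    """Fraction of positive (1) predictions in a list of 0/1 labels."""
    return sum(predictions) / len(predictions)

# Hypothetical binary predictions for two demographic groups.
group_a_preds = [1, 0, 1, 1, 0, 1, 1, 0]
group_b_preds = [0, 0, 1, 0, 0, 1, 0, 0]

rate_a = selection_rate(group_a_preds)
rate_b = selection_rate(group_b_preds)

# Demographic parity difference: 0 means both groups are selected
# at the same rate; larger values indicate a disparity worth auditing.
parity_diff = abs(rate_a - rate_b)
print(f"Group A rate: {rate_a:.3f}, Group B rate: {rate_b:.3f}")
print(f"Demographic parity difference: {parity_diff:.3f}")
```

Demographic parity is only one of several competing fairness metrics (others include equalized odds and predictive parity), and which one is appropriate depends on the application context.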
Target Audience:
This presentation is designed for professionals across various stages of AI adoption who prioritize ethical AI implementation.
Learning Outcomes:
Upon completion of this session, attendees will be equipped to:
1. Assess and mitigate bias in AI training data
2. Identify and document performance disparities in AI models
3. Develop and implement comprehensive testing strategies for detecting unfair biases
4. Implement governance frameworks for responsible AI development and deployment
Info for the Speaker Selection Committee
[1] "OpenAI's technology is upending our everyday lives. It's overseen exclusively by wealthy, White men," CNN, December 15, 2023. https://www.cnn.com/2023/12/15/tech/openai-board-diversity/index.html
___________________________________
Time Breakdown for 90-minute workshop
Introductions - 10 mins
Detecting examples of bias in AI - 65 mins
• Why biases are undesirable
• Evaluating model fairness
• Metrics for quantifying discrimination
• Bias mitigation metrics and toolkits
• Auditability, explainability, and accountability
Wrap up and Q&A - 15 mins
___________________________________
Presentation History
https://speakerdeck.com/billagileinnovator
Bill Allen
Startup Co-Founder delivering Product Innovation in Financial Services | Dojo Coach / Software Crafting Coach
Chicago, Illinois, United States