Technical Practices for Detecting Bias in AI: Building Fair and Ethical Models

In an era where generative Artificial Intelligence (AI) shapes realities, it's crucial to recognize the inherent biases these technologies may carry. Large language models (LLMs), developed using vast and diverse datasets, may reflect the biases inherent in the human-generated content on which they were trained.

According to Dr. Joy Buolamwini, founder of the Algorithmic Justice League, an organization tracking the harms of artificial intelligence, AI-powered tools are “determining who gets hired, who gets medical insurance, who gets a mortgage, and even who gets a date.” Buolamwini goes on to say, “When AI systems are used as the gatekeeper of opportunities, it is critical that the oversight of the design, development, and deployment of these systems reflect the communities that will be impacted by them.” [1]

But AI should benefit everyone, right? Even OpenAI (the maker of ChatGPT) publicly states that its mission is to ensure that artificial general intelligence “benefits all of humanity.” And the opening of OpenAI's GPT Store and similar venues has ushered in a new era of AI apps, which may introduce new ethical challenges. How will we govern this technology to ensure ethical practices and benefits for all of humanity?

During this presentation, participants will gain insights into using AI Observability, AI Governance, and other key concepts to ensure responsible management of AI systems in compliance with policies and standards.

Target Audience

This talk is for anyone using Artificial Intelligence (AI), whether a curious consumer or a seasoned technologist, who shares the common goal that AI should operate ethically and fairly.

Learning Outcomes

After attending this talk, participants will be able to identify bias in AI platforms and apply three key practices:
• Evaluate training data bias
• Document performance disparities
• Test models for unfair biases
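To make the third outcome concrete, here is a minimal sketch, in plain Python with hypothetical data, of one common fairness test: comparing a model's selection rates across two demographic groups (the "demographic parity" check). The group names and prediction values are illustrative assumptions, not from the talk itself.

```python
# Minimal sketch of a demographic parity check.
# The predictions below are hypothetical; in practice you would use
# your model's outputs split by a protected attribute in your data.

def selection_rate(predictions):
    """Fraction of positive (e.g., 'approve') predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute gap in selection rates between two groups.
    A value near 0 suggests parity; a large gap warrants investigation."""
    return abs(selection_rate(preds_group_a) - selection_rate(preds_group_b))

# Hypothetical binary predictions (1 = favorable outcome) for two groups
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # selection rate 0.375

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # → 0.375
```

Libraries such as Fairlearn and AIF360 provide production-grade versions of this and related metrics; the point here is only that the test itself is a small, auditable computation.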



Info for the Speaker Selection Committee

[1] “OpenAI’s technology is upending our everyday lives. It’s overseen exclusively by wealthy, White men,” CNN, December 15, 2023.
https://www.cnn.com/2023/12/15/tech/openai-board-diversity/index.html

___________________________________

Time Breakdown for 75-minute talk

Introductions - 10 mins

Detecting examples of bias in AI - 50 mins
• Why biases are undesirable
• Evaluating model fairness
• Metrics for quantifying discrimination
• Bias mitigation techniques and toolkits
• Auditability, explainability, and accountability
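One metric the "quantifying discrimination" segment could illustrate is the disparate impact ratio, often checked against the "four-fifths rule" from US employment law. A minimal sketch in plain Python, with hypothetical hiring numbers (not from the talk):

```python
def disparate_impact_ratio(selected_minority, total_minority,
                           selected_majority, total_majority):
    """Ratio of the minority group's selection rate to the majority's.
    Under the four-fifths rule, a ratio below 0.8 is commonly treated
    as evidence of adverse impact worth auditing."""
    rate_minority = selected_minority / total_minority
    rate_majority = selected_majority / total_majority
    return rate_minority / rate_majority

# Hypothetical outcomes from a hiring model
ratio = disparate_impact_ratio(selected_minority=30, total_minority=100,
                               selected_majority=50, total_majority=100)
print(f"Disparate impact ratio: {ratio:.2f}")  # → 0.60
print("Passes four-fifths rule" if ratio >= 0.8
      else "Potential adverse impact")
```

Here the minority selection rate (30%) is only 60% of the majority's (50%), falling below the 0.8 threshold, which is exactly the kind of quantitative signal an audit would flag.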

Wrap up and Q&A - 15 mins

___________________________________

Presentation History

https://speakerdeck.com/billagileinnovator

Bill Allen

Dojo Coach / Software Crafting Coach

Chicago, Illinois, United States
