Session
Is Your AI Trustworthy?
As Artificial Intelligence (AI) is increasingly integrated into every aspect of our lives, the question of trust in AI systems has never been more important. This presentation dives into the complexities of building, maintaining, and evaluating trust in AI technologies. We explore the key dimensions essential for trustworthy AI: ethics, reliability, privacy, explainability, fairness, and accountability, each playing a critical role in how we perceive and interact with AI systems.
This presentation aims to foster a deeper understanding of what it takes to make AI systems trustworthy and aligned with human values and societal norms.
Target Audience: AI professionals, developers, ethicists, and anyone interested in the ethical, technical, and social implications of AI.
Duration: 2 hours with time for questions
Attendee outcomes:
- The multiple dimensions of Trustworthy AI.
- Ethical theories that help us differentiate right from wrong, good from evil, and virtue from vice.
- Why accuracy, though a primary focus for AI developers, is insufficient on its own in today's complex AI landscape.
- The multiple ways users' privacy can be safeguarded.
- Explainability -- the ability to understand the internal workings and rationale behind AI predictions -- and why it matters more as AI systems become ever more intricate and opaque.
- How fairness and bias analysis scrutinizes the way AI systems treat different sub-populations, ensuring equitable treatment across demographics such as race, gender, and age.
- How accountability addresses the consequences and responsibilities when AI systems err or hallucinate.
Scott Gerard
Founder and Chief AI Officer at Knowledge Reactor
Raleigh, North Carolina, United States