Decoding the Algorithm: Why Explainability and Transparency Matter when Building AI-Driven Systems
SUMMARY
Imagine a law enforcement officer deciding whether to deploy a SWAT team or an oncologist relying on an algorithm to shape a cancer treatment plan. The stakes are high, and the need for informed decisions, corrections, and human intervention is paramount.
In a world increasingly shaped by AI, this talk delves into the pivotal significance of transparency, explainability, and human involvement in algorithmic decision-making.
Drawing on a diverse set of case studies, from both my own experience and the wider industry, I'll illustrate why these aspects matter and explore practical solutions for ensuring that the design of AI-driven systems delivers all the benefits we expect while remaining conscientious and minimising the risk of detrimental consequences.
DESCRIPTION
I start my talk with a range of examples from law enforcement, FinTech, EdTech and healthcare (some from my own experience, others based on industry case studies) to demonstrate the problems that can arise when we apply algorithms (AI or not) without considering their impact and context: when we simply deploy black-box algorithms and ignore the need for transparency, explainability and human involvement.
I'll then argue for the ethical and legal imperative of incorporating transparency, explainability, and human intervention in AI-driven systems. Highlighting evolving legal requirements, I'll underscore that these considerations are not just ethical obligations but are increasingly becoming legal mandates across jurisdictions. We'll explore key compliance requirements that creators and designers of algorithmically powered systems need to navigate.
Beyond the hype around AI, I'll stress that these considerations extend even to seemingly mundane algorithms like a 'boring' regression.
Concluding the talk, I'll provide actionable recommendations on integrating transparency, explainability, and human intervention into the core of these systems and the system development lifecycle. I'll touch on best practices, challenges, and strategies to overcome hurdles on the path to responsible AI implementation.
TAKEAWAY
Participants of my talk will come away with:
An understanding of why transparency, explainability, and human involvement are critical in AI applications
Strategies for embedding these values into team and organisational culture
Practical strategies for designing systems that not only comply with legal standards but contribute to a more responsible, humane and ethical future
Marcel Britsch
Digital consultant, product manager and business analyst
London, United Kingdom