The Trolley Dilemma in AI: Can Machines Make Moral Decisions?

As artificial intelligence systems become increasingly embedded in real-world decision-making, from autonomous vehicles to predictive policing, the age-old trolley problem has taken on new, urgent relevance. How do we program machines to make ethical decisions when human lives are at stake? Can an algorithm understand morality, or are we simply encoding the biases and blind spots of our own ethical frameworks?

In this session, we'll explore how the trolley dilemma serves as a powerful lens for examining the ethical challenges of AI in the Nigerian context and beyond. Drawing on real-world applications, we'll analyse how AI systems are being forced to make value-laden choices, and what frameworks developers and policymakers can use to guide them.

Key Learning Objectives:

Understand the origins and ethical implications of the trolley problem.

Analyse its relevance in modern AI decision-making, especially in critical sectors like transportation and healthcare.

Explore the limits of programming morality into machines and how bias, data, and societal values shape outcomes.

Encourage ethical AI development within Nigeria's growing AI ecosystem to promote fairness, accountability, and sustainability.

This is a non-technical presentation aimed at AI enthusiasts, researchers, students, educators, and policymakers interested in the ethical and social impact of AI systems.

Akpaka Collins Chidindu

Product Manager, Konga Group

Lagos, Nigeria
