
Explainable AI

Explainable AI is a branch of AI that seeks to make Machine Learning and Deep Learning models understandable by humans. When developing data-driven models, a trade-off often arises between accuracy and interpretability: improving a model’s predictive accuracy typically comes at the cost of its interpretability, and vice versa. This phenomenon is known as the “Interpretability-Accuracy Trade-Off”. Simple models such as linear regression provide transparent decision-making but may struggle with complex problems, whereas non-linear, highly parameterized models such as Artificial Neural Networks generally achieve superior predictive performance but are often too intricate for human understanding. The complexity of such systems renders them opaque, even to AI specialists, earning them the label of “black boxes”. However, these methods are applied in critical fields such as safety, healthcare, and food production, where understanding how decisions are made inside these black boxes is essential. This is exactly where XAI comes into play, making models more understandable and interpretable.
Understanding how models work is important not only for AI developers but also for end users, who seek clear explanations for the decisions made. Thus, the understandability and interpretability of models can be considered an essential aspect of user experience and a key factor in encouraging the adoption of AI-based systems in real-world applications.
To sum up, in this session I will present a range of explainable AI techniques used to analyze and interpret model decisions, making them more understandable.
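As a small illustration of the kind of technique the session covers (not taken from the speaker's material), the sketch below uses permutation importance, a model-agnostic explanation method available in scikit-learn, to rank which inputs drive the predictions of an otherwise hard-to-interpret model. The dataset and model choice here are illustrative assumptions only.

# Illustrative sketch: explaining a "black box" model with permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a tabular dataset and split it for evaluation.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a highly parameterized, non-linear model (accurate but opaque).
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature and measure the drop in score,
# yielding a human-readable ranking of the features the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")

Because the method only needs model predictions and a scoring function, the same code works for any classifier or regressor, which is what makes it a convenient first step when opening up a black box.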

Wafae Hamdaoui

PhD student


