How to Make Your Machine Learning Models More Interpretable and Explainable

Abstract: Machine learning models are often seen as black boxes that produce predictions without revealing the underlying logic or reasoning. This opacity poses challenges for trust, accountability, fairness, and debugging, especially in high-stakes domains such as healthcare, finance, and security.

In this presentation, we will introduce the concepts of interpretability and explainability in machine learning, and discuss why they are important for both developers and users of machine learning systems. We will also review some of the techniques and tools that can help make machine learning models more interpretable and explainable, such as feature importance, partial dependence plots, LIME, SHAP, and counterfactual explanations.
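As a flavor of the kind of technique the session covers, below is a minimal sketch of computing SHAP values for a tree-based model. It assumes the shap and scikit-learn packages are installed; the diabetes dataset and random forest are illustrative choices, not the session's own demo.

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a simple tree-based model on a public regression dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summarize each feature's contribution to predictions across the dataset.
shap.summary_plot(shap_values, X)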

We will demonstrate how to apply these techniques and tools to different types of models, such as linear models, tree-based models, and deep neural networks, using examples from real-world applications. Finally, we will highlight some of the challenges and limitations of existing methods, and suggest some directions for future research and practice.
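In the same spirit, a partial dependence plot can be produced directly with scikit-learn's inspection module. The sketch below again uses an assumed, illustrative setup (gradient boosting on the same public diabetes dataset) rather than the session's actual examples.

import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Fit a gradient-boosted model on the same public dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Show how the average prediction responds to "bmi" and "s5" in isolation.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "s5"])
plt.show()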

Jean Joseph

Technical Trainer/Data Engineer @Microsoft

Newark, New Jersey, United States
