Session

Interpretability of Machine Learning Models using Python and Microsoft InterpretML

Even today it is quite common to meet clients or colleagues who perceive Machine Learning models as black boxes capable of performing magic or alchemy to give the customer what they want. However, the stark reality is that without a reasonable understanding of how ML models work, real-world projects are rarely successful. Considering that any project of this kind has a significant business component, and that the ultimate goal of these models is to automate decisions for users, those users have the right to ask questions such as "How can I trust this model?" or "How does the model make its decisions?" Answering these questions can be challenging, and it is one of the areas where the most effort is being invested today.

In this session, we'll walk through different interpretability strategies using Python and packages such as Microsoft InterpretML.
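As a flavour of the kind of strategy covered, the sketch below shows how InterpretML's glass-box ExplainableBoostingClassifier can be trained and then queried for global and local explanations. It is a minimal illustration only, not the session's demo code: it assumes the interpret and scikit-learn packages are installed, and the breast-cancer dataset is just a stand-in for any tabular classification problem.

    # Minimal sketch: train an interpretable glass-box model with InterpretML
    # and inspect its global and local explanations.
    # Assumes: pip install interpret scikit-learn (dataset is a placeholder).
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from interpret.glassbox import ExplainableBoostingClassifier
    from interpret import show

    # Placeholder dataset; any tabular feature/target pair works the same way.
    data = load_breast_cancer(as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, test_size=0.2, random_state=42
    )

    # Glass-box model: accurate, yet its per-feature contributions can be inspected directly.
    ebm = ExplainableBoostingClassifier()
    ebm.fit(X_train, y_train)

    # Global explanation: which features drive the model's predictions overall.
    show(ebm.explain_global())

    # Local explanation: why the model scored these specific rows the way it did.
    show(ebm.explain_local(X_test[:5], y_test[:5]))

Glass-box models like this answer the "How can I trust this model?" question directly; for black-box models, the same package offers model-agnostic explainers that the session also touches on.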

Antonio José Soto Rodriguez

Director Verne Tech

A Coruña, Spain
