Speaker

Iskandar Setiadi

Software Engineer at HENNGE, K.K.

Tokyo, Japan

I am an Indonesian software engineer, and I have been working for HENNGE, K.K. in Tokyo, Japan since 2015.

Area of Expertise

  • Information & Communications Technology

Understanding Machine Learning Interpretability with Anime Characters Recognition

In the past, I challenged myself to learn machine learning in my spare time. I started building an application that recognizes anime characters in uploaded images. However, machine learning models often behave unexpectedly, since they don't necessarily follow human reasoning. For example, adversarial attacks show us how neural network models can be broken in ways we don't expect. One question remains: what should we do to determine the next step in improving our model?

Traditionally, we simply recorded the accuracy, precision, recall, and F1 scores of our model and then started adjusting hyperparameters or feeding it more data. Nevertheless, we often didn't know what went wrong or what went well. With various interpretable machine learning techniques, we can narrow our problems down to explanations that humans can easily understand. In the classic husky vs. wolf image recognition problem, a husky is classified as a wolf because the model correlates snow in the background with wolves. Going back to the application I described above: what is the most important feature of an anime character image? How can I improve my model's performance?
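As a minimal illustration of the "record the metrics" step, the standard scores can all be derived from a confusion matrix. The counts below are made up for a hypothetical binary "is this image character X?" classifier:

```python
# Hypothetical confusion-matrix counts for a binary classifier
# ("is this image character X?"); the numbers are illustrative only.
tp, fp, fn, tn = 80, 10, 20, 90

accuracy  = (tp + tn) / (tp + fp + fn + tn)   # fraction of correct predictions
precision = tp / (tp + fp)                     # of predicted positives, how many are real
recall    = tp / (tp + fn)                     # of real positives, how many were found
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")
```

A single number like accuracy hides whether the model misses positives (low recall) or over-predicts them (low precision), which is exactly the blind spot interpretability methods try to address.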

In this talk, we will explore the interpretability of machine learning models through an interesting case study!

---

The outline and timeline are as follows.

- Introduction (3 minutes)
  - Self-introduction
  - Why I use Python and became interested in machine learning
  - Current state-of-the-art performance in image recognition

- Deep Learning and Transfer Learning for Anime Characters Recognition (5 minutes)
  - Why existing models cannot be reused directly: comparing results between human faces and anime character faces
  - What transfer learning is and its benefits for small training datasets
  - Training a model for anime character recognition with the Inception v3 deep learning architecture
  - Showcase
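The transfer-learning idea above can be sketched in plain Python: keep a pretrained feature extractor frozen and train only a small classifier head on top. The "backbone" here is a hand-written stand-in (not real Inception v3 weights), and the dataset is synthetic; both are illustrative assumptions:

```python
import math
import random

# Stand-in for a frozen pretrained backbone (e.g. Inception v3 up to its last
# pooling layer). Real transfer learning reuses learned weights; this
# hypothetical extractor maps a raw "image" (a list of pixel values) to two
# summary features, and is never updated during training.
def frozen_features(pixels):
    return [sum(pixels) / len(pixels),      # mean intensity
            max(pixels) - min(pixels)]      # contrast

def predict(w, b, pixels):
    x = frozen_features(pixels)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))           # sigmoid probability

# Tiny synthetic dataset: label 1 = "bright" image, label 0 = "dark" image.
random.seed(0)
data = []
for label in (0, 1):
    base = 0.2 if label == 0 else 0.8
    for _ in range(50):
        data.append(([base + random.uniform(-0.1, 0.1) for _ in range(16)],
                     label))

# Train only the small logistic-regression "head"; the backbone stays frozen.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(200):
    for pixels, y in data:
        g = predict(w, b, pixels) - y       # dLoss/dz for log loss
        x = frozen_features(pixels)
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

accuracy = sum((predict(w, b, px) > 0.5) == (y == 1) for px, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

Because only two weights and a bias are trained, even a very small labeled dataset suffices, which is the main benefit the bullet on small training datasets refers to.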

- Interpretable Machine Learning (8 minutes)
  - The classic husky vs. wolf showcase
  - Interpretable machine learning and its importance
  - Linear regression (the simplest interpretable model)
  - Model-agnostic methods
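The model-agnostic, local-surrogate idea above can be sketched without any library: perturb one instance, query the black box, and fit a distance-weighted linear model whose coefficients serve as the local explanation. The black-box function, kernel width, and sample count below are all illustrative assumptions, not the real LIME implementation:

```python
import math
import random

# Hypothetical black-box model standing in for an image classifier; a
# local-surrogate explanation only needs its predictions, not its internals.
def black_box(x1, x2):
    return x1 ** 2 + x2            # nonlinear in x1, linear in x2

x0 = (1.0, 0.5)                    # the instance whose prediction we explain
f0 = black_box(*x0)

# 1) Sample perturbations around the instance and query the black box.
random.seed(1)
rows = []
for _ in range(500):
    dx1, dx2 = random.gauss(0, 0.3), random.gauss(0, 0.3)
    y = black_box(x0[0] + dx1, x0[1] + dx2) - f0
    w = math.exp(-(dx1 ** 2 + dx2 ** 2) / 0.25)   # proximity kernel
    rows.append((dx1, dx2, y, w))

# 2) Fit a weighted linear surrogate y ≈ c1*dx1 + c2*dx2 by solving the
#    2x2 normal equations with Cramer's rule.
a11 = sum(w * dx1 * dx1 for dx1, dx2, y, w in rows)
a12 = sum(w * dx1 * dx2 for dx1, dx2, y, w in rows)
a22 = sum(w * dx2 * dx2 for dx1, dx2, y, w in rows)
b1 = sum(w * dx1 * y for dx1, dx2, y, w in rows)
b2 = sum(w * dx2 * y for dx1, dx2, y, w in rows)
det = a11 * a22 - a12 * a12
c1 = (b1 * a22 - b2 * a12) / det
c2 = (a11 * b2 - a12 * b1) / det

# The surrogate's coefficients are the local explanation: near x0 the model
# is roughly twice as sensitive to x1 as to x2 (the true gradient is (2, 1)).
print(f"local importance of x1: {c1:.2f}, of x2: {c2:.2f}")
```

For images, LIME perturbs superpixels instead of raw coordinates, but the recipe is the same: sample, weight by proximity, fit an interpretable surrogate.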

- Interpretable Machine Learning x Anime Characters Recognition (6 minutes)
  - Similarity visualization between anime characters (expectation)
  - Local surrogate (LIME) with anime character recognition
  - Expectation vs. reality

- Conclusion (3 minutes)
  - Interpretability and model improvement
  - The future of interpretability

- Questions and answers (5 minutes)
