Speaker

Albert Dorador

Adjunct Professor of Statistics at BarcelonaTech

Barcelona, Spain

Albert Dorador is an Adjunct Professor of Statistics (BarcelonaTech) and Mathematics (Pompeu Fabra). He holds a PhD in Statistics from the University of Wisconsin–Madison and previously served at the European Central Bank, specializing in financial risk management and machine learning applications. Albert is the creator of, among others, the TRUST and Renet algorithms, and his work focuses on the intersection of high-performance optimization and auditable, "human-scale" machine learning. It centers on closing the "interpretability gap" in high-stakes regulatory environments, moving the industry toward deterministic, forensic-grade AI transparency.

Area of Expertise

  • Finance & Banking
  • Government, Social Sector & Education
  • Information & Communications Technology
  • Physical & Life Sciences

Topics

  • Machine Learning and AI
  • Explainable AI (XAI)
  • Machine Learning Explainability
  • Model Explainability
  • Trustworthy AI

The "Flicker Effect": Why Your Model Audits Are Lying to You

Have you ever estimated feature importance in scikit-learn, changed the random_state, and watched your "Top 5" features swap places? This is the "Flicker Effect."

For most Python developers, "shuffling" data (Permutation Importance) is the industry standard for explaining models. But in high-stakes environments like banking or healthcare, stochastic results are a liability. If you can’t get the same answer twice, can you really trust the audit?
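The effect described above is easy to reproduce. The following minimal sketch (assuming scikit-learn is installed; the dataset and model are illustrative, not from the talk) fits one model once, then runs permutation importance twice with different random_state values and prints the resulting feature rankings, which may disagree even though nothing about the model or data has changed:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: a few informative features among mostly noise
X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Same fitted model, same data -- only the permutation seed changes
for seed in (0, 1):
    result = permutation_importance(model, X, y, n_repeats=5,
                                    random_state=seed)
    ranking = np.argsort(result.importances_mean)[::-1]
    print(f"seed={seed}: feature ranking -> {ranking[:5].tolist()}")
```

Because each run shuffles the data differently, the estimated importances (and therefore the ranking) are random variables; with few repeats, closely matched features can swap places between runs.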

In this talk, we move beyond "random shuffling" toward Deterministic Model Auditing. We will explore:

- A Beginner-Friendly Introduction to the Math of Stability: How a "single optimal permutation" makes model explanations 100% reproducible and 30x faster.
- The Proxy Problem: How models "sneak in" biased data (like gender or race) through proxy variables, and how to detect this "signal leakage" using Systemic Variable Importance (SVI).
- Forensic-Grade AI: How to move from "black-box" guesses to audits that hold up under regulatory scrutiny.

Whether you are a data scientist building models or a developer curious about AI fairness, you will leave with a new framework for making your Python models truly accountable.
