Hernan Huwyler
AI Risk & Compliance Leader | Academic Director IE | Decision-Making | AI Threats and Controls | Responsible Technology | Quantitative Risk | Compliance
Copenhagen, Denmark
Hernan Huwyler is a Global Risk Management Executive and the Academic Director at IE Law and Business School (Executive Education). Holding both an MBA and a CPA, Hernan specializes in the intersection of Law, Technology, and Quantitative Risk.
He focuses on "translating" complex regulatory frameworks (EU AI Act, GDPR, ISO 42001) into actionable architectures for engineering and business teams. Unlike traditional compliance officers, Hernan advocates for data-driven risk management, moving beyond "heatmaps" to utilize Monte Carlo simulations, convolution models, and Python-based risk quantification.
Hernan has led Risk, Control, and Audit functions for major multinational corporations across Europe and the Americas. He writes and speaks extensively on Adversarial AI, Algorithmic Auditing, and the financial quantification of Cyber Risk. Past keynote venues include Risk Awareness Conferences (>2,000 risk managers), IIA and ISACA Summits, ProcureCom Europe, IKN Madrid, IE University, and closed-door sessions for international organizations and national regulators. Hernan’s talks consistently rank in the top 5% of conference feedback scores because he makes complex security requirements, business needs and regulations feel like a competitive advantage.
Speaking Topics:
AI Governance: Implementing ISO 42001 and navigating the EU AI Act.
Risk Quantification: Why "Impact x Probability" fails and how to use convolution models.
Adversarial AI: Balancing transparency with security to prevent data poisoning.
AI ROI: Risk-adjusted financial modeling for Generative AI deployments.
Ready-to-Submit Talk Proposals:
Session 1: The Technical/Quant Talk
Title: Kill the Heatmap: Quantifying AI Risk with Python & Monte Carlo Simulations
Abstract: Most organizations manage AI risk using colorful 5x5 "heatmaps." This is mathematically flawed and dangerous for high-stakes AI deployment. In this talk, we will ditch the gut-check methodology. I will demonstrate how to use convolution and Monte Carlo simulations to model the "tail risk" of AI hallucinations and regulatory fines. You will leave with a method to calculate the actual financial exposure of your models—no expensive GRC software required, just math and logic.
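A minimal sketch of the approach the session demonstrates, assuming purely illustrative frequency and severity parameters (the distributions and figures below are assumptions, not benchmarks):

    # Convolve event frequency and severity with Monte Carlo trials
    # instead of scoring a 5x5 heatmap. All parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(42)
    n_trials = 50_000

    # Frequency: loss events per year (e.g., harmful hallucinations
    # reaching a customer), assumed Poisson with mean 3.
    events = rng.poisson(lam=3, size=n_trials)

    # Severity: cost per event (remediation, fines, churn), assumed
    # lognormal; one annual total per simulated year.
    annual_loss = np.array(
        [rng.lognormal(mean=10, sigma=1.2, size=n).sum() for n in events]
    )

    print(f"Expected annual loss: {annual_loss.mean():,.0f}")
    print(f"95th percentile (tail risk): {np.percentile(annual_loss, 95):,.0f}")

The 95th percentile, not the average, is the number that belongs in a liability reserve discussion.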
Session 2: The Governance/Security Talk
Title: The Transparency Trap: How to Audit AI Without Creating Vulnerabilities
Abstract: New standards like ISO/IEC 12792 demand AI transparency, but in the age of Adversarial Machine Learning, "oversharing" is a security vector. If you reveal your full architecture and training sources, you invite data poisoning and model inversion attacks. This session outlines a "Tiered Disclosure Architecture." We will discuss how to route specific data to auditors, regulators, and users differently, ensuring compliance with the EU AI Act without handing the keys to bad actors.
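The "Tiered Disclosure Architecture" can be made concrete with a routing table. A minimal sketch, where the tier names and artifact labels are hypothetical examples rather than the taxonomy of ISO/IEC 12792:

    # Each audience tier sees only the artifacts cleared for it.
    # Tier names and artifacts are hypothetical placeholders.
    DISCLOSURE_TIERS = {
        "public":     {"model_card_summary", "intended_use"},
        "customers":  {"model_card_summary", "intended_use",
                       "evaluation_results"},
        "auditors":   {"model_card_summary", "intended_use",
                       "evaluation_results", "training_data_lineage",
                       "risk_assessment"},
        "regulators": {"model_card_summary", "intended_use",
                       "evaluation_results", "training_data_lineage",
                       "risk_assessment", "architecture_details"},
    }

    def disclose(artifact: str, audience: str) -> bool:
        """Allow an artifact out only if cleared for this audience."""
        return artifact in DISCLOSURE_TIERS.get(audience, set())

    assert disclose("evaluation_results", "auditors")
    assert not disclose("architecture_details", "customers")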
Session 3: The Business/Strategy Talk
Title: The Real Cost of Intelligence: A Risk-Adjusted Framework for AI ROI
Abstract: Companies are deploying Generative AI based on productivity hype, but they are ignoring the "OpEx Iceberg"—model drift, inference costs, and liability reserves. This talk presents a financial framework for calculating the true ROI of AI. We will cover how to factor in the "Risk Delta" (new risks vs. automated risks), calculate the Levelized Cost of AI (LCOAI), and determine when to build vs. buy based on process uniqueness. A session for leaders who need to justify the cloud bill to the CFO.
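To make the LCOAI idea concrete, here is a hedged sketch of the arithmetic with made-up figures (a fuller model would also discount future cash flows):

    # Levelized cost: total cost of ownership divided by useful output,
    # with an annual risk reserve folded into OpEx. Figures are made up.
    def lcoai(capex, annual_opex, annual_risk_reserve,
              annual_outputs, years):
        """Cost per useful output over the deployment lifetime."""
        total_cost = capex + years * (annual_opex + annual_risk_reserve)
        return total_cost / (years * annual_outputs)

    # 200k build cost, 150k/yr inference and drift retraining,
    # 50k/yr liability reserve, 1M completed tasks/yr, 3-year horizon.
    print(f"Cost per task: {lcoai(200_000, 150_000, 50_000, 1_000_000, 3):.3f}")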
From AI Hype to AI Accountability: Risk, Assessments and Controls for Responsible Adoption
Organizations are deploying AI into production systems that affect credit decisions, hiring, fraud detection, and customer outcomes. Most of them have a responsible AI policy. Very few have controls that a regulator, an auditor, or a plaintiff's attorney would accept as evidence.
This session closes the gap between what an AI governance policy says and what systems can actually prove. In 30 minutes, I will walk through the three failures I see most often in AI risk programs: superficial risk assessments that cannot be audited, ethical principles that cannot be enforced technically, and ownership structures where nobody is actually accountable. I will then show participants what to build instead.
Participants will see how to replace subjective analysis with quantitative risk models that translate bias exposure, model drift, and regulatory penalty scenarios into financial terms a board can act on. They will learn why system-level controls (middleware that validates every output against policy rules before it reaches the user) are more defensible than model-level alignment, which jailbreak research consistently shows can be bypassed.
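A minimal sketch of such an output-validation layer, with hypothetical policy rules standing in for a real rulebook:

    # Middleware that checks every model output against explicit policy
    # rules before it reaches the user. Rules here are placeholders.
    import re

    POLICY_RULES = [
        ("no_account_numbers", re.compile(r"\b\d{10,16}\b")),
        ("no_unhedged_advice", re.compile(r"\bguaranteed return\b", re.I)),
    ]

    def validate_output(text):
        """Return (allowed, violated_rule_names) for a model output."""
        violations = [name for name, pattern in POLICY_RULES
                      if pattern.search(text)]
        return (not violations, violations)

    allowed, hits = validate_output("Your guaranteed return is 12%.")
    print(allowed, hits)  # False ['no_unhedged_advice']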
This is not a session about principles. It is a session about controls, numbers, and evidence. If you are responsible for AI systems in production and cannot answer five questions about each control (what is the metric, what is the threshold, who owns it, how often is it tested, and what happens when it is breached), this session will show you how to get there.
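The five questions map one-to-one onto a control record; a minimal sketch with illustrative field values:

    # One auditable register entry per control; values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class AIControl:
        metric: str         # what is measured
        threshold: str      # when it counts as breached
        owner: str          # who is accountable
        test_cadence: str   # how often it is tested
        breach_action: str  # what happens when it is breached

    drift_control = AIControl(
        metric="population stability index on input features",
        threshold="PSI > 0.2 on any monitored feature",
        owner="Head of Model Risk",
        test_cadence="weekly automated job",
        breach_action="freeze model and trigger retraining review",
    )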
Participants will leave with practical insights they can apply on Monday morning, whether they are a CAIO building a governance program, a security architect designing AI threat models, or an engineering leader trying to figure out what the EU AI Act actually requires from their deployment pipeline.