Speaker

Seppe Housen

Getting organizations ready for responsible AI

Mechelen, Belgium

Hi, I’m Seppe!

I work on Responsible AI at the intersection of Artificial Intelligence (AI), risk management, and governance, with a clear mission: AI that benefits everyone.

With over five years of experience in AI, including the last three years focused specifically on Responsible AI, I specialize in helping organizations unlock the value of AI while proactively managing its risks. I combine hands-on technical expertise with a strong governance mindset, enabling me to translate complex regulatory and ethical requirements into concrete, actionable guidance for both technical and business stakeholders.

I play a central role in developing and scaling Datashift’s Responsible AI offering. Achievements I am particularly proud of include:
- Implementing Belgium’s first successful AI risk management framework for AI front-runners
- Acting as a trusted advisor to more than 15 strategic clients on Responsible AI
- Designing an internal screening process to assess the risks of all AI systems developed by Datashift
- Working on real-world AI systems ranging from agentic AI and customer-facing chatbots to healthcare AI, government evaluation systems, and pricing models in banking

What once felt like a “weakness” - an atypical background - has become my greatest strength. I combine hands-on technical experience in AI with an academic background in Criminology, Management, and Applied Economics, allowing me to connect societal impact, business value, and technical robustness. This unique perspective enables me to bridge the needs of executives, risk managers, and AI practitioners.

I began my speaking journey through Datashift’s internal knowledge-sharing sessions, and quickly expanded to community meet-ups, webinars, conferences, and interactive workshops, engaging audiences ranging from beginners to seasoned experts.

I am passionate about sharing insights on Responsible AI because AI is too valuable to ignore, yet too risky to approach carelessly.

Outside of Responsible AI, you’ll most likely find me in the gym - training my lifts and preparing for my next competition, such as the European Weightlifting Championships.

Area of Expertise

  • Business & Management
  • Finance & Banking
  • Information & Communications Technology

Topics

  • Responsible AI
  • AI Risk Management
  • Trustworthy AI
  • AI Ethics and Regulatory Standards
  • Artificial Intelligence and Machine Learning

Introduction to Responsible AI: Balancing Value and Risk

AI is changing the world we live in. From chatbots to breakthroughs in science, mathematics, and business, today’s innovations are increasingly built on AI technology. The message is clear: organizations cannot afford not to adopt AI.

At the same time, we see a growing number of risks emerging. Some are almost comical - chatbots making silly mistakes - but with very real consequences for the organizations operating them. Others are far more serious, such as bias in pricing and credit models, misuse of personal data, or, at the extreme end of the spectrum, long-term existential risks as AI capabilities continue to grow.

So how do we move forward?

This session introduces Responsible AI as a practical way to balance AI’s enormous value with its equally significant risks.

In this talk, I will cover:
- Why Responsible AI matters - and why it is more important today than ever before

- The key components and processes organizations need to govern AI and manage its risks

- The broad group of stakeholders involved, from data & AI teams and business leaders to executives and control functions

- Concrete, real-world examples of how AI risks can be identified, managed, and monitored in practice

The session is designed to be accessible for newcomers, while still offering structure and insights that resonate with more experienced practitioners.

This session has been refined through internal knowledge-sharing sessions and delivered in part at external conferences, to diverse audiences with limited prior exposure to Responsible AI.

The ideal session length is 30–60 minutes and can be extended with interactive elements, such as a hands-on AI risk management exercise for small groups (10–20 participants).

Language: I am comfortable delivering this session in both English (EN) and Dutch (NL).

AI - What it Will Take to Make the Future "Good"

In recent years, impressive AI systems have captured global attention. From breakthroughs in science to applications in business, AI is no longer a distant possibility, but already shaping our world. As the technology grows, so do the stakes. How can we ensure AI delivers positive impact rather than unintended harm?

In this keynote, I explore the future of AI through three guiding questions, blending concrete examples with a discussion of the opportunities and risks ahead.

Is AI truly going to change our world?
Critics often argue that AI is “just predicting the next word,” that it doesn’t truly understand concepts, and that current breakthroughs are limited to narrow, single-domain applications. Believers, on the other hand, point to impressive successes such as AlphaFold solving protein structures, AI winning gold medals in mathematics Olympiads, and the widespread adoption of chatbots that democratize access to knowledge and services. Looking forward, frontier AI promises humanoid robots, world models, automatic coding systems, and even the long-discussed prospect of artificial superintelligence. The real debate is not whether AI will change the world, but how profoundly and in what ways.

Will the impact of AI be positive or negative?
Already, AI is creating tangible benefits across science, healthcare, finance, and business. Yet risks are equally real. Organizations face challenges such as biased pricing models, chatbots that hallucinate, and AI systems making poor operational decisions. Looking ahead, frontier AI raises concerns ranging from cybersecurity threats to nuclear or biochemical risks. And in the long-term, the rise of artificial superintelligence introduces existential considerations that require careful thought and planning.

What is needed to maximize positive impact?
Ensuring AI benefits society requires action on multiple fronts. First, we need to research and manage risks across three levels: immediate but smaller organizational risks, near-future frontier risks, and long-term existential risks. Second, we need to collaborate across industry, academia, and policy to foster knowledge sharing. And finally, we need to break the “AI bubble” in which practitioners often operate: many people outside the AI sector have little awareness of AI’s potential and pitfalls. Broad engagement and communication are key to creating a future where AI serves the collective good.

This keynote combines real-world examples, lively debate, and strategic insight, giving executives a clear view of the opportunities, risks, and steps needed to guide AI toward a positive impact.

Format: Typically 45–60 minutes, with optional extended Q&A or interactive discussion.

Target audience: Executives, senior leaders, policymakers, and industry stakeholders.

Language: Delivered in English (EN) or Dutch (NL).
