

Dr. Neda Maria Kaizumi
Head of AI at Advania Norway
Oslo, Norway
Dr. Neda Maria Kaizumi is a Norwegian citizen and a distinguished technology expert working at the intersection of AI development, global compliance, and international regulation. With a doctorate in Organizational Behaviour, she combines a rich academic foundation with practical insight, further honed through the Business Analytics and Data Science executive program at Harvard Business School.
As an ISO standards committee member, Dr. Kaizumi shapes AI governance and compliance frameworks, advocating ethical AI aligned with global standards such as the EU AI Act and beyond.
An accomplished author, public speaker, and advocate for ethical AI development, Dr. Kaizumi integrates her technical acumen and business expertise to empower organizations navigating the complex global landscape of AI governance. With a deep understanding of regulatory frameworks and their impact on technology, Dr. Kaizumi is a leading voice in fostering innovation that aligns with compliance and sustainability.
Area of Expertise
Topics
Your AI Is Still Biased (Even After You Checked)
We often talk about bias in AI like it's something you can "fix" before launch and forget. Run a few checks, clean your data, tick some boxes, and you’re good to go—right? Not really.
In real life, AI systems keep changing after they go live. They get retrained, updated, and interact with real people in real situations. And every time we do that, there’s a chance we introduce new bias, even if we’ve done everything “right” during development. That’s why we need to stop treating bias like a one-time issue and start thinking of it as part of the whole AI lifecycle.
This talk is about how bias doesn’t just happen before deployment, it’s something that can grow during testing, retraining, and even regular use. And if we want to build fairer, more trustworthy AI, we need to look at bias as an ongoing process, not a checklist item.
I’ll also talk about why we can’t leave this work just to the AI engineers. Designers, product managers, developers, testers, legal teams, even users, all need to be part of the conversation. Because each of them sees risks, gaps, and impacts from a different angle.
We’ll look at:
- How bias creeps in after launch, even if your original model was clean
- Examples of how retraining or personalization can make things worse
- How to build "de-biasing" into your everyday work, not just at the start
- Ways to involve different teams and stakeholders at every stage
Securing the Future: AI-Powered Compliance Resilience
As cyber threats grow more sophisticated, compliance is no longer a checkbox but a cornerstone of security. This session delves into how AI and data analytics empower organizations to stay ahead of evolving security and compliance requirements. By harnessing AI’s capabilities, companies can automate threat detection, streamline compliance reporting, and improve resilience against breaches. Learn how to navigate the intersection of cutting-edge technology and regulatory expectations to create a robust, secure, and future-proof organization.
Secure Architecture: Offensive Security Tactics and Regulatory Alignment for IT Experts
In this session, we will explore how enterprise architects can proactively design secure, scalable systems that not only withstand current threats but also anticipate future risks. By integrating offensive security tactics with regulatory requirements from the outset, architects can build a resilient framework that supports the entire organization. We'll cover how to weave compliance into the design process to mitigate potential vulnerabilities and ensure alignment with evolving regulations. Attendees will gain actionable insights into how offensive security strategies can be leveraged to reinforce secure architecture while maintaining a strong regulatory posture, ensuring their designs are both robust and compliant across the enterprise.
Navigating the Divide: Lessons from Europe’s DORA and GDPR for the US Compliance Landscape
As Europe pioneers comprehensive regulatory frameworks like GDPR and DORA, the US struggles with fragmented and sector-specific compliance standards. This talk explores how the US can draw from Europe's cohesive approach to data protection and operational resilience. We’ll compare the strengths and gaps of both systems, uncovering actionable strategies for aligning ethical AI, compliance resilience, and global security demands. Attendees will gain insights into adopting best practices from Europe to future-proof their organizations against regulatory and operational risks.
From Algorithms to Action: The Power of AI & ML in Revolutionizing Cybersecurity Defense
This session explores how artificial intelligence (AI) and machine learning (ML) are transforming cybersecurity defense. We will cover the core concepts of AI/ML, focusing on their ability to analyze vast datasets, detect emerging threats, and respond autonomously in real-time.
Through real-world use cases, attendees will see how AI/ML are used in threat detection systems, automated incident response, and predictive threat modeling. We’ll highlight advancements in AI-driven technologies like autonomous security systems and next-gen threat intelligence.
Additionally, we will discuss key challenges, including algorithmic bias, the need for continuous model training, and the ethical considerations of AI in cybersecurity. By the end of the session, attendees will understand how AI/ML can enhance threat management and how to effectively integrate these technologies into their defense strategies.
Beyond AI
Artificial intelligence (AI) is no longer just a technology term – it is a driving force in our society. In this episode, we dive into how AI shapes everything from the news cycle to schooling, politics, and the way we interact as people. With increasingly advanced systems, we have gained enormous opportunities – but also a great responsibility.
How is AI used in practice today? What happens when algorithms decide what we see, what we believe – and even what we vote for? We discuss everything from how healthcare can be strengthened through AI-supported diagnostics to how deepfakes and automated disinformation create uncertainty and weaken trust in democratic processes.
But it is not only about threats. We also explore how AI can help us solve major societal challenges: climate change, resource management, education, and social inclusion. All of this is possible – if the technology is used with sound judgment and ethical principles.
In the studio, we are joined by an exciting panel of experts and public figures:
Hans Olav Sundfør, mayor of Lillehammer municipality. He shares his reflections on how AI can influence and support local community development, the role municipalities can play in promoting responsible use of technology, and much more.
Lars Erik Jensen, project lead for the National AI Committee at Standard Norge. He gives us insight into the ongoing work on standardizing artificial intelligence in Norway, and how this can contribute to safe, fair, and sustainable technology development.
Italo D. Chirico, head of advisory at Advania. With extensive experience implementing and scaling digital and AI-driven solutions, he shares reflections on what it takes to succeed with responsible use of artificial intelligence in Norwegian organizations – and the pitfalls facing both customers and suppliers.
This episode is for you if you are curious, perhaps a little skeptical, but want to understand more about how AI is shaping tomorrow's society – and how we can work together to ensure it happens in a safe and responsible way.
Agile by Design: Embedding Responsible AI into Adaptive System Architectures
In an era where artificial intelligence (AI) is rapidly transforming industries, designing system architectures that are both agile and responsible has become paramount. This presentation explores methodologies for embedding Responsible AI principles into system architectures that are inherently adaptable to evolving regulatory landscapes and scalability demands.
By combining agile methodologies with Responsible AI frameworks, organizations can develop systems that are not only efficient and scalable but also ethically sound and legally compliant. This integrated approach ensures that as AI technologies advance, the systems built upon them remain trustworthy and aligned with societal values.
