Speaker

Tomiwa Falade

Tomiwa Falade

Offensive Security Engineer

Lagos, Nigeria

Actions

Tomiwa is an Offensive Security Engineer with a passion for exploring how attackers think, build, and operate. His work spans red teaming, adversary simulation, and security research, where he experiments with modern techniques for bypassing defenses and uncovering real-world attack paths.

With experience designing custom tools, running offensive security labs, and translating complex concepts into clear insights, he helps both technical and non-technical audiences understand the evolving threat landscape. He has spoken on topics ranging from ethical hacking and malware trends to practical steps for building resilient applications and infrastructure.

Beyond offensive security, he is a strong advocate for knowledge-sharing across the wider tech community whether mentoring aspiring security professionals, engaging with developers on secure coding practices, or contributing to open discussions on the future of cybersecurity.

Area of Expertise

  • Business & Management
  • Finance & Banking
  • Government, Social Sector & Education
  • Information & Communications Technology
  • Media & Information

Topics

  • Cybersecuirty
  • AI and Cybersecurity
  • cybersecurity awareness
  • Cybersecurity Threats and Trends
  • cyber security
  • Emerging Cybersecurity Topics
  • Offensive Security
  • Hacking
  • cyber security awareness
  • Social Engineering and Phishing:
  • Malware
  • Penetration Testing & Ethical Hacking
  • Secure Coding & Code Review Practices
  • Secure Coding & Cybersecurity
  • Identity & Access Management
  • Application Security
  • Web Application Security
  • Zero Trust Architecture
  • Developer
  • Google Developer Group
  • open source
  • Google Devfest
  • cybersecurity

Adversarial AI: Testing Your Models Like a Pentester Would

As African tech companies rapidly adopt AI, we're building systems that think and make decisions. But are we testing them like the critical infrastructure they're becoming?
While QA teams excel at testing functionality and performance, AI systems face threats that traditional testing misses entirely. Pentesters have been breaking AI models for years using prompt injection, model poisoning, and adversarial inputs - but this knowledge rarely reaches QA teams.
In this session, you'll learn to think like an attacker when testing AI systems. We'll explore real examples of AI vulnerabilities in production systems and discover how simple prompts can bypass filters, corrupt model behavior, and leak sensitive data.
You'll learn:

Common AI attack vectors QA should test for
Practical adversarial testing techniques
How to integrate AI security into existing workflows
Real case studies from African startups

Takeaways:
AI security testing checklist
Understanding how attackers target AI
Actionable techniques you can implement immediately

This session bridges traditional QA and AI security, ensuring your systems are functional AND resilient against attack. Perfect for QA practitioners future-proofing their skills as AI becomes central to African tech.

Tomiwa Falade

Offensive Security Engineer

Lagos, Nigeria

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top