Session

Adversarial AI: Testing Your Models Like a Pentester Would

As African tech companies rapidly adopt AI, we're building systems that think and make decisions. But are we testing them like the critical infrastructure they're becoming?
While QA teams excel at testing functionality and performance, AI systems face threats that traditional testing misses entirely. Pentesters have been breaking AI models for years using prompt injection, model poisoning, and adversarial inputs - but this knowledge rarely reaches QA teams.
In this session, you'll learn to think like an attacker when testing AI systems. We'll explore real examples of AI vulnerabilities in production systems and discover how simple prompts can bypass filters, corrupt model behavior, and leak sensitive data.
You'll learn:

Common AI attack vectors QA should test for
Practical adversarial testing techniques
How to integrate AI security into existing workflows
Real case studies from African startups

Takeaways:
AI security testing checklist
Understanding how attackers target AI
Actionable techniques you can implement immediately

This session bridges traditional QA and AI security, ensuring your systems are functional AND resilient against attack. Perfect for QA practitioners future-proofing their skills as AI becomes central to African tech.

Tomiwa Falade

Offensive Security Engineer

Lagos, Nigeria

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top