The AI Security Summit is coming to London, and we're looking for speakers who have something to show.
AISS isn't a conference where you present a slide deck about the threat landscape. It's where you demo the exploit, walk through the defense you actually shipped, or share the research that changes how people think about a problem. Every talk is 20 minutes. No filler. No "overview of the space." If your talk doesn't include original work (a tool, a technique, a finding, a live demo), it's probably not a fit.
We're specifically looking for practitioners working at the bleeding edge of AI security: adversarial ML researchers with new attack surfaces to reveal, red teamers with novel methodology, builders who've hardened production AI systems and can show how, and engineers solving the hard problems around model integrity, supply chain trust, and secure deployment in regulated environments. The EU AI Act is reshaping what compliance looks like. If you're building the tooling that makes it real (not just auditable), we want to hear from you.
Our bar is high and our acceptance rate is low. We'd rather have 5 incredible talks than 40 forgettable ones. The audience is technical, experienced, and here to learn, not to be sold to. Past AISS speakers have demoed live model exploits, open-sourced defensive tooling on stage, and presented findings that shifted how teams approach AI risk. That's the standard.
20 minutes. Show us what you've built, broken, or discovered. Submit your proposal below.
We accept 20-minute technical talks only. No panels, no fireside chats, no "state of the industry" keynotes. Twenty minutes is enough to show something real, and short enough that you can't hide behind filler.
Every accepted talk must include original work. That means: a live demo, a proof of concept, a novel tool, original research findings, or a technical post-mortem. If your talk could be delivered by someone who read the blog post instead of doing the work, it won't make the cut. Show, don't tell.
Topics
We're interested in talks that advance the practice of AI security. Priority areas include, but aren't limited to:
If you're working on something at the bleeding edge that doesn't fit neatly into these categories, submit it anyway. We leave room for the unexpected.
What we don't want
Level of expertise
Talks should target a technically experienced audience. Assume attendees have working knowledge of machine learning, software security, and current AI deployment patterns. You don't need to explain what an LLM is; get to the interesting part.
Target audience
The Practitioner Track audience includes: security engineers, ML engineers, AI red teamers, application security leads, DevSecOps engineers, and security researchers actively working on AI systems. These are people who build, break, and defend AI in production. They're here to learn something they can apply.
Selection criteria
Proposals are evaluated on:
We reject more proposals than we accept. A strong proposal is specific: it tells us exactly what you'll demo, what the audience will learn, and why it matters now.