Call for Speakers

AI Security Summit - London

event date

14 May 2026

location

CodeNode, 10 South Pl, London EC2M 7EB, United Kingdom


The AI Security Summit is coming to London, and we're looking for speakers who have something to show.

AISS isn't a conference where you present a slide deck about the threat landscape. It's where you demo the exploit, walk through the defense you actually shipped, or share the research that changes how people think about a problem. Every talk is 20 minutes. No filler. No "overview of the space." If your talk doesn't include original work (a tool, a technique, a finding, a live demo), it's probably not a fit.

We're specifically looking for practitioners working at the bleeding edge of AI security: adversarial ML researchers with new attack surfaces to reveal, red teamers with novel methodology, builders who've hardened production AI systems and can show how, and engineers solving the hard problems around model integrity, supply chain trust, and secure deployment in regulated environments. The EU AI Act is reshaping what compliance looks like. If you're building the tooling that makes it real (not just auditable), we want to hear from you.

Our bar is high and our acceptance rate is low. We'd rather have 5 incredible talks than 40 forgettable ones. The audience is technical, experienced, and here to learn, not to be sold to. Past AISS speakers have demoed live model exploits, open-sourced defensive tooling on stage, and presented findings that shifted how teams approach AI risk. That's the standard.

20 minutes. Show us what you've built, broken, or discovered. Submit your proposal below.

Call opens at 12:00 AM on 27 Feb 2026

Call closes at 11:59 PM on 17 Apr 2026

All times are in GMT Daylight Time (UTC+01:00).

We accept 20-minute technical talks only. No panels, no fireside chats, no "state of the industry" keynotes. Twenty minutes is enough to show something real, and short enough that you can't hide behind filler.

Every accepted talk must include original work. That means: a live demo, a proof of concept, a novel tool, original research findings, or a technical post-mortem. If your talk could be delivered by someone who read the blog post instead of doing the work, it won't make the cut. Show, don't tell.

Topics

We're interested in talks that advance the practice of AI security. Priority areas include, but aren't limited to:

  • Adversarial ML: new attack vectors, novel defenses, and real-world exploitation of production AI systems
  • AI red teaming: methodology, tooling, and findings from systematic adversarial testing
  • LLM security: prompt injection, jailbreaking, data extraction, and defenses that actually work at scale
  • Model integrity and supply chain security: weight poisoning, fine-tuning risks, trust boundaries for shared and open models
  • Secure AI development: architectures, patterns, and tooling for building security into AI systems from day one
  • AI in regulated environments: making compliance auditable and engineering-friendly, including the EU AI Act, sector-specific frameworks, and the tooling that bridges policy and practice
  • Defensive tooling: open-source or commercial tools that practitioners can put to work immediately
  • Failure stories: what went wrong, what you learned, and what you changed. These are some of our most valued talks

If you're working on something at the bleeding edge that doesn't fit neatly into these categories, submit it anyway. We leave room for the unexpected.

What we don't want

  • Product pitches or vendor demos disguised as technical talks
  • High-level overviews of the AI threat landscape (the audience already knows)
  • Talks without a demo, proof of concept, or original findings
  • Content that has already been presented at other conferences without significant new material
  • Speculative talks about theoretical risks without concrete, technical substance

Level of expertise

Talks should target a technically experienced audience. Assume attendees have working knowledge of machine learning, software security, and current AI deployment patterns. You don't need to explain what an LLM is; get to the interesting part.

Target audience

The Practitioner Track audience includes: security engineers, ML engineers, AI red teamers, application security leads, DevSecOps engineers, and security researchers actively working on AI systems. These are people who build, break, and defend AI in production. They're here to learn something they can apply.

Selection criteria

Proposals are evaluated on:

  • Originality: Is this new work, a new perspective, or a new finding?
  • Technical depth: Does the talk go beyond surface-level discussion?
  • Demonstrability: Can you show it, not just describe it?
  • Practical value: Will an attendee leave with something they can use?
  • Clarity: Is the proposal well-written and specific about what the audience will learn?

We reject more proposals than we accept. A strong proposal is specific: it tells us exactly what you'll demo, what the audience will learn, and why it matters now.


event fee

free for speakers
