Call for Speakers


[un]prompted - The AI Security Practitioner Conference

Event starts: 3 Mar 2026

Event ends: 4 Mar 2026

Location: Salesforce Tower, San Francisco, California, United States


Whether you’re a CISO Excel jockey or a researcher sniffing for the scent of bits, we see you as part of our wider AI security practitioner community.

[un]prompted is an intimate, raw, and fun gathering for the professionals actually doing the work, from offense to threat hunting to program building to national policy. No fluff. No filler. Just sharp talks, real demos, and conversations that matter.

Let’s take AI back from the marketers.

What You Can Expect

[un]prompted was created by volunteers behind events such as Prompt||GTFO, fwd:cloudsec, ACoD, and ISOI.

We care about seeing what actually works for you in AI. As long as we keep the fluff out, talks can cover any topic from the deeply technical to national policy.

All sessions are 20 minutes long (+10 for questions), with an option to submit 10-minute lightning talks. Demos are encouraged; slides are allowed but should be kept to a bare minimum.

Check out the conference, our CFP, and the review board on our site: https://unpromptedcon.org/

Call for Speakers
Call opens: 15 Dec 2025 at 12:00 AM

Call closes: 28 Jan 2026 at 11:59 PM, Pacific Standard Time (UTC-08:00)

AI in cybersecurity has moved past the proof-of-concept phase. Organizations are deploying agents in production, security teams are automating with LLMs, and attackers are already exploiting AI systems. But we're drowning in hype, vendor pitches, and CTF demos that don't translate to real environments.

This vendor-agnostic conference brings together practitioners building and breaking AI systems with leaders managing the strategic and organizational challenges. We want specifics: What actually works? What failed spectacularly? How is 2030 shaping up to look different from 2025? We're looking for 10-minute lightning talks and 25-minute deep dives that share real implementation details, hard-won lessons, and honest assessments. Not sales pitches or theoretical exercises.

NOTE: We strongly recommend checking the main website for direct commentary from our review board on what they would like to see.

Presentation Tracks

TRACK 1: Building Secure AI Systems

How are you architecting, testing, and operating AI/ML systems in production? We want the technical details others can learn from.

Topics We’re Looking For:
  • Prompt injection defenses (simple and complex multi-agent scenarios)
  • Sandboxing approaches, action/intention quantification for agents
  • SDLC/MDLC security practices that actually work
  • AI supply chain risk management
  • Security evaluation frameworks and datasets beyond red team CTFs
  • Data poisoning, backdoors, model collapse, recursive pollution
  • Real transparency solutions (not black boxes)
  • Failures: what didn’t work and why

TRACK 2: Attacking AI Systems

How are AI systems actually being compromised in the wild?

Topics We’re Looking For:

  • Prompt injection chains across multiple systems/agents
  • Adversarial attacks on models in production environments
  • Reinforcement learning exploitation and reward hacking
  • Training data extraction, backdoors, poisoning
  • Zero-day vulnerabilities specific to AI architectures
  • Practical examples from real engagements (not sanitized demos)

TRACK 3: Using AI for Offensive Security

Show us AI tools you’ve deployed (or tried to deploy) for offensive security.

Topics We’re Looking For:

  • Autonomous red teaming
  • Vulnerability discovery and exploit generation
  • Cost-effective fuzzing
  • Hackbots and agentic penetration testing on production systems (not CTFs)
  • Agentic workflows for research and attack

TRACK 4: Using AI for Defensive Security

Show us AI tools you’ve deployed (or tried to deploy) for defensive security:

Topics We’re Looking For:

  • Threat detection, incident response automation, vulnerability management at scale
  • Agentic systems for threat hunting, log analysis, triage
  • Automated threat modeling, secure design assistance
  • ROI reality check: what did AI replace vs. assist vs. make harder?
  • Deepfake detection, deception as defense
  • Augmenting or replacing security governance tools (audit evidence gathering, GRC automation, etc.)
  • Tools your teams built internally that replaced vendors
  • War stories and lessons learned

TRACK 5: Strategy, Governance & Organizational Reality

What are leaders grappling with as they deploy, govern, and scale AI?

Topics We’re Looking For:

  • Talking to executives and boards about AI security risks and ROI
  • Shadow AI: detection, management, policy that works
  • Measuring what matters: benchmarks, metrics, real outcomes
  • Skillsets needed and how to train non-early-adopters
  • Enterprise challenges: immature tooling, lack of admin panels, configuration nightmares
  • Regulatory compliance and policy navigation
  • Organizational change management for AI adoption
  • What are boards actually asking for?

TRACK 6: Practical Tools & Creative Solutions

Show us the AI tools, prompts, and workflows you’ve built that make your job easier—even if they’re not enterprise-grade or polished products. This is the “here’s something neat I made, you might find it useful” track.

Topics We’re Looking For:

  • Custom GPTs, artifacts, and agents for specific security tasks
  • Prompt engineering techniques that actually work (the ones most people don’t know about)
  • Open-source tools and scripts that solve real problems
  • Workflow automations using AI that save you time
  • Engineering patterns that work well for AI-assisted development
  • Examples: threat modeling assistants (like StrideGPT), analysis tools (like RAPTOR), triage helpers, evidence collectors, documentation generators
  • We want to see what you’ve tested and what works well in real practice, not a CTF or theoretical demo


Submission Guidelines

What Makes a Strong Submission:

  • Specific examples with enough detail others can apply
  • Honest assessment of what worked and what didn't
  • Data, metrics, or real-world validation (not vibes)
  • Clear takeaways for attendees in similar situations
  • Acknowledgment of tradeoffs and limitations

What We're NOT Looking For:

  • Vendor product pitches
  • Purely theoretical attacks with no production context
  • Hype without implementation details
  • Talks that could have been a blog post

Up to three hotel nights are covered for accepted speakers.

