Call for Speakers: AI Engineer World's Fair 2025
Shape the Future of AI Engineering!
Join us at the AI Engineer World's Fair 2025, the biggest event dedicated to exploring the cutting edge of AI engineering practices and technologies. We're seeking passionate speakers to share their insights, experiences, and innovations with a global audience of AI engineers, technical founders, and technology leaders.
Share Your Expertise:
We're looking for compelling talks that delve into the practical applications and advancements in AI engineering. Ideally, your presentation will be engaging, informative, and inspiring for a technically savvy audience. Standard talk length is 18 minutes (no on-stage Q&A).
Selection Process & Speaker Benefits:
- Expert Review: Submissions will be carefully reviewed by the AI Engineer World's Fair Selection Committee.
- Global Exposure: Present your work to a diverse, international audience and establish leadership in the emerging AI Engineering industry.
Suggested Tracks:
We encourage submissions across a broad range of AI engineering topics, particularly live demos and launches grounded in real-life experience, real user data, and product-market fit (with millions of views for our talks online, we are a great stage for high-profile launches). Talks that simply shill your company's product (especially with lazy titles) will be desk-rejected.
Here are non-exhaustive areas of interest for the 2025 World's Fair, which we intend to formalize into tracks:
AI Architects
- Exclusive track for AI Leadership (CTOs, VPs of AI, and AI Architects at >1000 person enterprises)
- Track defined by Bret Taylor, CEO of Sierra and Chairman of OpenAI
- Hiring and scaling AI Engineer orgs, including Comp/Career Ladders
- Defining AI Strategy and Executing AI Transformations and Pivots
- Compliance, Data Partnerships, and other Legal AI Concerns
- Heuristics and Data for Build vs Buy decisions for AI infra
- If you are the Most Senior AI Person at your company, we want to hear from you about what you did and what everyone in your role should be doing.
/r/localLlama
- Any topic covered on /r/localLlama is welcome; high-ranking posters from the subreddit are especially encouraged
- Launches of Open Weights/Open Source models and nontrivial finetunes
- Adapting open models for business or personal needs (including roleplay)
- Local inference tools (e.g. Ollama/MLX) and platforms (e.g. SillyTavern/LMStudio)
- Personal/private/local agents (inspired by Soumith Chintala’s AIE talk)
Model Context Protocol (MCP)
- Anthropic will be presenting a full overview of the state of MCP
- We want talks on hard problems with MCP integration (new clients, stateful/stateless transports, sampling, auth, observability, service discovery, hierarchical MCP), as well as nontrivial demos and external contributions to the protocol (a minimal server sketch follows this list)
- Talks about A2A (Google's new agent-to-agent protocol) are welcome
- Very high bar for talks about MCP registry/hosting platforms, [my language] SDKs and thin MCP wrappers of existing APIs
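For speakers wondering how small an MCP demo can start, here is a minimal server sketch, assuming the official Python SDK's FastMCP helper (the server name and tool are illustrative; check the current protocol docs, as details may shift):

```python
# Minimal MCP server sketch, assuming the official Python SDK's FastMCP helper.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-weather")  # server name shown to MCP clients

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a canned forecast; a real server would call a weather API here."""
    return f"Forecast for {city}: sunny, 22 degrees"

if __name__ == "__main__":
    # Serve over stdio so an MCP client (e.g. Claude Desktop) can launch it.
    mcp.run(transport="stdio")
```

The hard-problem talks we want pick up where a sketch like this stops: stateful transports, auth, sampling, and discovery across many such servers.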
GraphRAG
- Neo4j will host this track, continuing from their very well received AIEWF and AIENYC talks on GraphRAG
- We want talks on appropriate use of knowledge graphs to enhance retrieval and generation, architectures and tools for building GraphRAG applications, and real-world GraphRAG research papers / use cases / case studies
- Special call for talks on agent graph memory as well
- All DB and Knowledge Graph speakers are explicitly welcome to apply
AI in Action
- Kevin Ball and Manuel Odendahl will lead this track based on the Latent Space community and AIEWF 2024 workshop
- Practical advice on using all kinds of AI tooling (voice, code, notes, whatever) to improve your productivity
- Users only: speakers who aren't trying to sell you their employer's products
- Looking for power users of Cursor, Windsurf, ChatGPT, Lindy, Notion AI etc. to share their life/work productivity hacks
- If you’ve spent way too much time on .cursorrules or similar, apply here
Evals
- Overviews of frontier LLM Evals and trends (e.g. Epoch, LMArena, Artificial Analysis)
- Launch/Updates of new/impactful benchmarks for the industry to align on
- Concrete advice on how AI Engineers can make custom product evals less painful - online AND offline
- LLM-as-Judge and Human-in-the-loop approaches are both needed (a minimal offline sketch follows this list)
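To make "less painful" concrete, here is a minimal sketch of an offline LLM-as-judge loop; the `EvalCase` shape, judge template, and PASS/FAIL rubric are illustrative stand-ins rather than a prescribed framework:

```python
# A minimal offline LLM-as-judge eval loop (a sketch, not a framework).
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    reference: str  # gold answer or rubric notes

JUDGE_TEMPLATE = (
    "You are grading a model answer.\n"
    "Question: {prompt}\nReference: {reference}\nAnswer: {answer}\n"
    "Reply with exactly PASS or FAIL."
)

def run_evals(cases: list[EvalCase],
              candidate: Callable[[str], str],
              judge: Callable[[str], str]) -> float:
    """Grade each candidate answer with a judge model; return the pass rate."""
    passed = 0
    for case in cases:
        answer = candidate(case.prompt)
        verdict = judge(JUDGE_TEMPLATE.format(
            prompt=case.prompt, reference=case.reference, answer=answer))
        passed += verdict.strip().upper().startswith("PASS")
    return passed / len(cases)

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end without API keys.
    cases = [EvalCase("What is 2 + 2?", "4")]
    print(run_evals(cases, candidate=lambda p: "4",
                    judge=lambda p: "PASS" if "Answer: 4" in p else "FAIL"))
```

Talks in this track can cover what the sketch leaves out: judge calibration, online sampling, and keeping eval suites cheap enough to run on every change.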
Agent Reliability
- Holding a given capability constant (assume that we have good evals), how do we then make it consistent and reliable? (One common pattern is sketched after this list.)
- We are looking for the definitive talk that will shape the industry’s reliability thinking in 2025
- But if you've done something interesting at your company, good case studies on how you did it are also welcome!
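As a starting point rather than the definitive talk we are asking for, one common reliability pattern is validate-and-retry with a majority-vote fallback; the function names and defaults below are illustrative:

```python
# One common reliability pattern: validate-and-retry, then majority vote.
import collections
import random
from typing import Callable, Optional

def reliable_call(step: Callable[[], str],
                  validate: Callable[[str], bool],
                  max_retries: int = 3,
                  vote_samples: int = 5) -> Optional[str]:
    """Return the first validated output, else the most common of N samples."""
    for _ in range(max_retries):
        out = step()
        if validate(out):
            return out
    samples = [step() for _ in range(vote_samples)]
    return collections.Counter(samples).most_common(1)[0][0] if samples else None

if __name__ == "__main__":
    # Toy flaky step that sometimes returns malformed output.
    flaky = lambda: random.choice(["42", "42", "oops"])
    print(reliable_call(flaky, validate=lambda s: s.isdigit()))
```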
Reasoning and RL
- Train-Time Sorcery: GRPO, DAPO, DPO, and other mid-/post-training tricks that beat plain PPO. Show curves, share code, commiserate and celebrate. (The DPO objective is sketched after this list for reference.)
- Finetune Fight Club: reward modeling, policy distillation, offline RL loops, and when to ditch RL for direct‑pref learning. Bring Unsloth, Axolotl, vLLM, whatever ships.
- Proof‑of‑Thought: fresh reasoning datasets, self‑verifiers, program‑aided reasoning, SAT‑style checkers—make chain‑of‑thought actually chain.
- Cross‑Pollination: academic insights meeting real‑world P&L—open‑source models, closed‑source products, papers that should be products and vice versa.
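For speakers arguing "when to ditch RL for direct-pref learning", here is a minimal PyTorch sketch of the DPO objective (Rafailov et al., 2023) that those comparisons are usually made against; the tensor names and beta default are illustrative, and the per-sequence log-probs are assumed to come from your own policy and reference models:

```python
# Minimal sketch of the DPO loss over a batch of preference pairs.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization: prefer chosen over rejected completions."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    # -log sigmoid(beta * margin): widen the policy's preference margin.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

if __name__ == "__main__":
    # Random log-probs standing in for real per-sequence sums from your models.
    fake = [torch.randn(8) for _ in range(4)]
    print(dpo_loss(*fake))
```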
Retrieval, Search, and Recommendation Systems
- We are looking for the best RAG talks - not just new techniques; comprehensive, one-stop surveys are also very welcome (a bare-bones retrieval sketch follows this list)
- We are now also adding LLM-improved RecSys talks given the tremendous growth in the field
- Special callout for notable RAG/RecSys+LLM work if you work at a consumer-facing, household name company!
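To anchor the track's terminology, here is a bare-bones sketch of the retrieval half of RAG, deliberately using word-overlap vectors in place of a real embedding model; the corpus, scoring, and k are illustrative, and production systems would swap in an embedding API and a vector index:

```python
# The retrieval half of RAG, with word-overlap vectors standing in for embeddings.
from collections import Counter
import numpy as np

def bow_vector(text: str, vocab: list[str]) -> np.ndarray:
    """Count-of-words vector over a fixed vocabulary (toy embedding)."""
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

def top_k(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages with the highest cosine similarity to the query."""
    vocab = sorted({w for t in corpus + [query] for w in t.lower().split()})
    docs = np.stack([bow_vector(t, vocab) for t in corpus])
    q = bow_vector(query, vocab)
    sims = docs @ q / ((np.linalg.norm(docs, axis=1) + 1e-9) * (np.linalg.norm(q) + 1e-9))
    return [corpus[i] for i in np.argsort(-sims)[:k]]

if __name__ == "__main__":
    corpus = [
        "GraphRAG combines knowledge graphs with retrieval",
        "Tuning a vector index for low latency retrieval",
        "Realtime voice agents and speech models",
    ]
    print(top_k("vector index retrieval latency", corpus))
```

The talks we want start where this stops: chunking, reranking, hybrid search, evaluation, and the recommendation-system crossovers called out above.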
Security
- Red‑Team Tales: post‑mortems and advice on jailbreaks, prompt injections, and guardrails for LLM safety
- Privacy & Sovereignty: zero‑trust data flows, PII‑/HIPAA‑/FedRAMP playbooks, regional fine‑tunes, air‑gapped inference.
- Trust Layers: auth + billing patterns for multi‑tenant LLM APIs, token quotas, usage‑based pricing, and abuse throttling.
- Model Supply‑Chain Security: signing, SBOMs for weights, reproducible builds, and how to catch a poisoned LoRA before prod.
Infrastructure
- GPU‑less Futures: Neoclouds, Ultrafast/Real-time Inference, and building Data Centers/AI Factories
- Sub‑50 ms Inference: serverless KV‑caches, speculative decoding, tensor‑parallel tricks, and quantization stacks.
- Fleet Orchestration: hot‑swap containers, WASM sandboxes, petabyte‑scale weight distribution, and auto‑suspend/‑resume agents.
- Any “LLM OS” tools that don’t fit in any other track, both hardware and systems software
Generative Media
- Models, Products and Platforms for generating images, audio, and video. What is the state of the art in AI Art?
- Pipeline Craft: control‑nets, style adapters, and prompt‑programming for brand‑safe output at scale.
- Creator Economy: case studies where users made something jaw‑dropping (yes, bring the meme video).
- Ethics & IP (No Snoozing): watermarking, provenance graphs, rev‑share schemes—tell us what actually works.
- What are the most interesting creations made by your users/customers? (AI artists can apply too!)
AI Design & Novel AI UX
- New track for designers building AI-powered experiences
- We are seeking the new Bret Victor of AI-HCI. This is the stage to do it.
- Both talks showing a production AI product development process AND novel, thought-provoking, not-yet-real demos are fully welcome.
- Special callout for talks on translating Imagegen/Diffusion model design into real products
- If you are a Designer or Design Engineer doing interesting things with AI at work or personally, just submit it.
AI Product Management
- New track for product managers building AI products
- 0→1→N: road‑mapping when the model spec changes weekly—stories of pivots that saved (or sunk) the ship.
- PM ↔ Eng Handshake: interface contracts, eval‑driven backlogs, and killing bad ideas fast without bruised egos.
- Metric North Stars: beyond “accuracy”— experimentation/testing workflows, latency budgets, delight scores, and cost‑to‑serve dashboards that stick.
- The Art of GPT Wrapping: pricing, packaging, and positioning when your “feature” is an LLM anyone can call.
- What AI Engineers need to work well with PMs and vice versa
- If you are an AI PM and want a megaphone to speak to the industry, this is it.
Autonomy, Robotics, and Embodied Agents
- Launches, Research / Demos on LLMs x Robotics: Tesla Optimus, Gemini Robotics, NVIDIA GR00T, Physical Intelligence, Figure, 1x, CloudChef, etc
- If you use LLMs/Transformers in the physical world, this is your track.
Computer-Using Agents (CUA)
- Long-running Web Search, Browser, and other "Computer-Using" Agent launches and architecture breakdowns, e.g. Gemini Deep Research, OpenAI Operator, Anthropic Computer Use, Claude Plays Pokemon, Manus, Rabbit Intern, General Agents, OpenInterpreter-like agents
- Special call for talks on improving Screen Vision accuracy, including new models/tools
- Vision is important for CUA, but the focus is on building general purpose agents that achieve very long-running memory, planning, and autonomy.
SWE Agents
- Both Inner Loop Agents (e.g. Copilot, Cursor, Windsurf, Claude Code) and Outer Loop Agents (e.g. Devin, Factory, Codegen) primarily built for software engineers (though non-engineers can use them too) working on production systems
- Automating software development workflows with AI-powered code agents
- Best practices for AI-assisted debugging, refactoring, and code review
- The role of code agents in accelerating enterprise software development
Vibe Coding
- Code Agents for nontechnical people (e.g. Bolt, Lovable, v0) building ephemeral software and low code prototypes
- Best practices, and how to get out of trouble when vibe coding doesn't work
- Live demos of Vibe Coding on stage (must rehearse with organizers)
- If you have a hot take (including a well argued NEGATIVE one) or good demo or tool for vibe coding, please apply.
Voice
- Real-time voice AI for any personal/business needs
- New speech-to-text, text-to-speech, speech-to-speech models
- Challenges in voice agent personalization, context retention, function calling
- If you are doing ANYTHING interesting in voice, submit.
Sales/Support Agents
- AI-powered chatbots vs. human-assisted AI for customer support
- Using AI to enhance ticket resolution and customer interactions
- Training support agents with real-world customer data
The Great AI Debates
- A new track modeled after Frankle v Patel at NeurIPS!
- People learn the most from good-faith disagreement.
- Choose an interesting proposition for an Oxford-style debate and apply with 2-4 names; speakers should ideally be both knowledgeable and quick debaters on their feet. All speakers should be able to attend in person (remote considered case by case) and will receive standard speaker privileges.
- Winners decided by the largest shift (delta) in audience votes between the pre- and post-debate polls
Anything Else
- We want the best talks in AI Engineering, regardless of whether they fit cleanly into a specific category
- Pick a good enough topic and we’ll form a track around you: borders are imaginary, good talks are forever.
- Reminder: talks that simply shill your company's product (especially with lazy titles) will be desk-rejected with prejudice.
Benefits of Speaking:
- Community Impact: Contribute to the advancement of AI engineering by sharing your knowledge and expertise.
- Professional Recognition: Enhance your reputation as a thought leader in the AI engineering community.
- Company Visibility: Showcase your company's expertise and innovation to a global audience.
- Career Advancement: Expand your professional network and open doors to new opportunities.
Deadline for Submissions:
11:59pm April 21, 2025.
All submitters will be notified by April 30.