Call for Papers
Have a compelling technical story, innovative application, or a visionary idea in data streaming? Now’s the time to share it.
Speaking at Current 2025 is an opportunity to engage with thousands of peers, deepen your involvement in the data streaming community, and position yourself as a thought leader in the industry. We're looking for talks that are informative, technical, and inspiring—so bring your engineering spirit and unique insights!
Why speak at Current 2025
As a speaker, you'll enjoy these exclusive benefits:
- Free conference pass
- Invitation to the Speaker Dinner
- Your headshot, bio, and session featured on the Current 2025 website
- Session promotion across YouTube, X (formerly Twitter), and LinkedIn
- The chance to share your expertise with fellow thought leaders in the community
Important Dates
- Call for Papers Opens: November 7, 2024
- Speaker Office Hours:
November 26 | December 11 | December 17 (08:00 - 09:00 IST each day)
- Call for Papers Closes: December 19, 2024
- Speaker Notifications Sent: January 17, 2025
Dates subject to change.
Abstract Submissions
We're seeking talks on a wide range of topics related to data streaming, including real-time systems, event-driven design, dedicated data layers, and machine learning platforms. Think of technologies like Kafka, Flink, and similar tools.
Topics for consideration:
Platform and Architecture
From beginner to advanced. Share the patterns, frameworks, and methods your organization uses for data streaming. Discuss upcoming features, migrations, and other insights or production war stories. Think about solutions like incremental processing, write-ahead logs, operational analytics, exactly-once guarantees, etc.
Stream Processing
You’ve got data in streams/topics… Now what? Tell us about your approaches to extracting value from data in motion using tools like Apache Flink, Kafka Streams, and more.
The Analytics Estate
Cover open table formats (Apache Iceberg, Apache Hudi, Apache Paimon, Delta Lake), querying “data in motion” and “data at rest”, visualizing insights, and making analytics suck less.
Stream Governance
Discuss governance pillars—stream catalog, lineage, quality, schema registry—and how “shift left” impacts pipeline development.
Operations and Observability
From provisioning to securing data streaming platforms, share insights into self-service approaches, end-to-end testing, and best practices. Perhaps you have a story of how GitOps has enabled a “self-service” approach for development teams, or you would like to showcase your organization’s investment in observability. What about end-to-end testing with TestContainers?
Security
From the basics of securing your stream to building DevSecOps into a platform, we will explore security as “job zero” for a data streaming platform.
Data Integration
Cover techniques, frameworks, and case studies on moving data between systems. Think Kafka Connect, Airflow, Apache NiFi, etc.
AI Applications
Explore how data streaming powers AI-driven solutions in real-time environments. Share insights on integrating ML models with streaming platforms for live predictions, automating model retraining, or scaling AI workloads with tools like Flink ML or TensorFlow and patterns like RAG and agentic AI. We’re interested in innovative stories, from real-time anomaly detection, GenAI, and recommendation systems to optimizing event-driven AI models and pipelines.
Data Catalogs
Data catalogs are an essential part of any data infrastructure. Have best practices, innovations, or a challenge you would like to share? Think Apache Hive, Hive Metastore, DataHub, Amundsen, Apache Atlas, and others.
Case Studies
Share your organization’s experience with streaming, from problem-solving to adoption lessons. All case studies are eligible for a Data Streaming Award nomination (details below).
Submission Guidelines
At this stage, please submit a title and compelling abstract that clearly outlines the key points of your talk in under 400 words. Additionally, include a short bio (150 words or less) to introduce yourself and establish your experience and expertise in the field.
Your abstract should briefly cover:
- Core Theme: What central idea or concept will you be discussing?
- Technical Depth: Indicate the technical level and any specific tools or methodologies you’ll address.
- Relevance: Explain how this topic applies to real-world data streaming scenarios.
- Audience Takeaways: Describe the actionable insights or knowledge the audience will gain.
Be concise and captivating! This abstract will help us understand the essence of your talk and how it aligns with the event’s goals.
Here are some example talk titles:
- "Stories from the Trenches: Overcoming LLM Hallucinations with RAG"
- "Our Shift from Batch to Real-Time at Gotham Bank—Successes and Setbacks"
- "How $STREAMING_TECHNOLOGY Revolutionizes Developer Workflows"
- "Event Sourcing vs. Event Streaming: Conceptual Similarities, Practical Differences"
- "An overview of AI Applications"
- "Tiered storage deep dive"
- "Streaming ETL at ACME_CORP: here's what we built"
- "Rewiring your 'relational brain' for Event Sourcing"
- "How Nidavellir Enterprises manages schema change in a multi-team streaming environment"
- "Getting started with $STREAMING_TECHNOLOGY"
- "We blinked and it was gone: Ops stories from running streaming systems in production at Vormir Corp."
- "Who needs stream processing when you've got Postgres?"
- "Deep dive into components of a data streaming platform"
Talk Types
Choose a session format that best communicates your story:
- Lightning Talk: 10 minutes (1 speaker)
- Show Me How: 20-minute live coding session, no slides (1 speaker)
- Breakout Session: 30 minutes (includes 5 mins Q&A, up to 2 speakers)
- Live Podcast Recording: 30 minutes (up to 5 speakers, representative of the diverse data streaming community)
FAQs:
Q: What is Current Bengaluru 2025: The Data Streaming Event (previously Kafka Summit)?
A: A global event that brings together leading experts, researchers, and open source contributors in data streaming technologies such as Kafka, Flink, and Iceberg.
Q: Will the suffix "previously Kafka Summit" eventually be dropped from the event name?
A: Likely, yes. The phrasing underlines that Kafka Summit is growing into something broader, so it will seem less appropriate in three to five years' time. But Current will always include Kafka Summit.
Q: I am a new speaker, is there someone I can speak with regarding submitting?
A: Yes! Join our Speaker Slack channel for asynchronous mentoring.
Q: Who should attend Current Bengaluru 2025: The Data Streaming Event (previously Kafka Summit)?
A: Kafka engineers, Flink engineers, data streaming engineers, data engineers, Java developers, Python developers, machine learning engineers, researchers and practitioners, business executives, IT decision-makers, product managers, entrepreneurs, investors and more.
Q: What are the Data Streaming Awards?
A: This industry award celebrates outstanding data streaming case studies. Nominations are evaluated by an expert panel, and winners are announced at the US edition of Current. (See past winners.)
Q: Will I receive updates on my submission?
A: Yes, you’ll receive email updates on your submission status.
Q: What is the “Show Me How” session?
A: A 20-minute live coding session where you demonstrate how to build a solution. No slides, just code and problem-solving in real time.
Ready to Submit?
Bring your best ideas, make them unforgettable, and help shape the future of data streaming at Current Bengaluru 2025!
Questions? Please email organizers@current.io