
What If We've Been Scaling Stream Processing Wrong All Along?

Your Kafka Streams application just rebalanced. Again. Your Flink checkpoint is timing out. Again.

Here's an uncomfortable truth: most stream processing applications don't operate at Uber scale. They handle thousands of events per second—complex joins, stateful aggregations, valid use cases—but nowhere near the volumes that justify the operational complexity we've accepted as normal.

Yet we pay the full distributed systems tax anyway. Repartition topics doubling network I/O. Repeated serialization burning CPU cycles. Standby replicas sitting idle. State migration or restoration during every deployment. And the human cost: specialized expertise that takes years to develop, and expert teams that are expensive to build and painful to lose.

We've normalized extraordinary inefficiency in the name of horizontal scalability that many applications will never need.

But rethinking stream processing in 2026 doesn't mean "just use Postgres."

In this talk, I'll share an early-stage exploration of a different approach: a framework that preserves the Kafka Streams DSL, borrows Flink's approach to exactly-once semantics, leverages Project Loom for high concurrency—and challenges a fundamental assumption that both frameworks share.

This isn't a production-ready announcement. It's an invitation to question conventional wisdom and explore what stream processing could look like when we stop distributing by default.

Hartmut Armbruster

Software Architect, Developer

Berlin, Germany


