AJ Danelz

Golang enthusiast | DevRel | Cloud Native | Streaming

I am a full-stack engineer and developer advocate, passionate about cloud application design and event-driven architecture. I enjoy outdoor activities like disc golf, hiking, camping, and scuba diving. I consider myself a tinkerer and a mixologist, and I sometimes come up with good ideas.

Area of Expertise

  • Information & Communications Technology

Squash the CQRS Monster: Simplify Eventing with Kafka, gRPC, and Zilla


The world we all live in is event-driven. Modeling our applications to closely reflect that real-world behavior with technologies like gRPC, Apache Kafka, and Zilla simplifies the reliable communication between event-driven applications and microservices.

CQRS (Command Query Responsibility Segregation) is a complicated name for a simple idea – allow the “read” data model and the “write” data model to differ, so that reads and writes can be optimized independently. Developers typically face challenges when handling the complexity of “write” commands while balancing the performance and freshness of “read” query results. Tackling these issues shouldn’t require complicated systems.
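The read/write split can be sketched in a few lines of Go. This is a minimal illustration, not the session's implementation, and all type and method names here are hypothetical: a command handler owns the write model and appends events, while a separate read model consumes those events into a query-optimized projection.

```go
package main

import "fmt"

// Event is the write side's source of truth: an append-only log entry.
// In an EDA system this would flow through a broker such as Kafka.
type Event struct {
	Type string // e.g. "OrderPlaced"
	ID   string
}

// CommandHandler owns the write model: it validates commands and appends events.
type CommandHandler struct {
	log []Event
}

// PlaceOrder handles a write command and emits the resulting event.
func (h *CommandHandler) PlaceOrder(id string) Event {
	e := Event{Type: "OrderPlaced", ID: id}
	h.log = append(h.log, e)
	return e
}

// ReadModel is a separate, query-optimized projection of the event log.
type ReadModel struct {
	orderCount int
}

// Apply updates the projection; in a real system this would run
// asynchronously, consuming events from the broker.
func (r *ReadModel) Apply(e Event) {
	if e.Type == "OrderPlaced" {
		r.orderCount++
	}
}

// OrderCount answers a read query without touching the write model.
func (r *ReadModel) OrderCount() int { return r.orderCount }

func main() {
	h := &CommandHandler{}
	r := &ReadModel{}
	r.Apply(h.PlaceOrder("order-1"))
	r.Apply(h.PlaceOrder("order-2"))
	fmt.Println("orders:", r.OrderCount())
}
```

Because the two models share nothing but the event stream, each can be optimized (or scaled) independently, which is the whole point of the pattern.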

This session will explore how event-driven architectures (EDA) can leverage Zilla as an on-ramp to Kafka, with Protobufs providing a reliable message structure. We will show how backend services can become event-driven while maintaining application-specific endpoints for compatibility. The audience will see how to easily create streaming endpoints, allowing both microservices and end users to adopt the CQRS pattern.

It Is Time to Reconsider Protobuf

Protobuf adoption remains low despite years of maturity, but not for the reasons most developers think. The real barrier is not complexity or tooling; it is that most developers have only ever worked with JSON and never had a reason to choose something different. Protobuf does not ask you to compete with JSON on its home turf. It asks you to think about your interfaces differently.

The real case for Protobuf isn't serialization speed. It's contract-first development. One `.proto` file drives type generation across every language in your stack, schema drift becomes a lint error, and breaking changes get caught before they ship. Modern tooling (buf, ConnectRPC, protovalidate) has removed every historical friction point. This talk covers the practical path to adopting Protobuf without abandoning REST or JSON where they already work.
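What "one `.proto` file drives everything" looks like in practice is a small contract like the following. The package, service, and field names are invented for illustration; the shape is standard proto3.

```protobuf
syntax = "proto3";

package orders.v1;

// One schema, generated into types for every language in the stack.
message PlaceOrderRequest {
  string order_id = 1;
  int64 amount_cents = 2;
}

message PlaceOrderResponse {
  string order_id = 1;
}

service OrderService {
  rpc PlaceOrder(PlaceOrderRequest) returns (PlaceOrderResponse);
}
```

Renaming a field or changing a type here fails code generation (or breaking-change checks) in every consumer at once, which is how schema drift becomes a build-time error instead of a production surprise.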

The tooling story has changed significantly. buf handles linting, formatting, and breaking change detection in CI. ConnectRPC works over plain HTTP without a proxy. protovalidate puts validation rules directly in the schema. Postman, VS Code, and IntelliJ all have native support.
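As a concrete taste of that tooling, a minimal `buf.yaml` (v1 config layout) turns lint and breaking-change detection into two CI commands, `buf lint` and `buf breaking`:

```yaml
version: v1
lint:
  use:
    - DEFAULT
breaking:
  use:
    - FILE
```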

Audience: polyglot developers, API designers, platform engineers who dismissed gRPC or Protobuf in the past. No prior Protobuf experience required.

Duration: 25 min, adaptable to a lightning talk or 45-min deep dive.

First delivery. No special requirements.

Building a User-Facing Audit Log Archive with OpenTelemetry and DuckDB

Your OTEL traces already contain every business event your users care about: file uploads, payments, and document changes. This talk shows how to filter that data out of the infrastructure noise, store it in S3 or GCS as plain JSONL, and query it with DuckDB. Filtering down to audit events cuts storage by 92%; the DuckDB warm path is 2,200x faster than scanning raw JSONL. No new databases, no managed services, no per-query cost.

Tiered storage is the key architectural decision. Hot queries hit an in-memory cache; warm queries hit a denormalized DuckDB file (23ms); cold queries hit JSONL on object storage (single-digit seconds). Each tier serves a different query pattern without forcing everything through one system.

A single Protobuf schema ties the ingestion pipeline and search API together, so event classification stays consistent as the system evolves.

Audience: backend, platform, and DevOps engineers working with OpenTelemetry. Familiarity with OTEL concepts (spans, attributes) is helpful but not required.

Duration: 30–40 min. Talk includes code examples and benchmark data.

First delivery. No special requirements.

The Mikado Method in the Age of AI Agents

The Mikado Method (attempt a change naively, map what blocks you as a graph, undo, work the leaves first) has always been a sound way to approach large refactors. The reason it doesn't stick is the undo step: discarding hours of work is hard for humans under deadline pressure. AI coding agents invert that cost. A discarded branch costs seconds. This talk covers how to apply the method when the agent handles the attempts and you maintain the graph.

The human's job changes when agents do the implementation. Prerequisite discovery and graph maintenance become the primary skill: writing specs that produce useful failures, turning those failures into graph nodes, and assigning leaf tasks to fresh agent sessions.

This talk walks through how to apply Mikado thinking to AI-assisted development: how to write specs that produce useful failures, how to turn those failures into graph nodes, how to assign leaf-node tasks to fresh agent sessions, and how to know when a branch is done. It also covers the limits: where the graph becomes too expensive to maintain and when a simpler approach is the right call.
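The graph at the heart of the method is small enough to sketch. In this illustrative Go version (the node fields and function names are my own, not from the talk), a "leaf" is any unfinished node whose prerequisites are all done; those are the tasks you hand to fresh agent sessions.

```go
package main

import "fmt"

// Node is one prerequisite in a Mikado graph: a change that must
// land before its dependents can.
type Node struct {
	Goal    string
	Prereqs []*Node
	Done    bool
}

// Leaves returns the nodes that are ready to attempt now: not yet
// done, with every prerequisite already done.
func Leaves(root *Node) []*Node {
	var out []*Node
	seen := map[*Node]bool{}
	var walk func(n *Node)
	walk = func(n *Node) {
		if seen[n] {
			return
		}
		seen[n] = true
		ready := !n.Done
		for _, p := range n.Prereqs {
			walk(p)
			if !p.Done {
				ready = false // a blocker remains; this node is not a leaf
			}
		}
		if ready {
			out = append(out, n)
		}
	}
	walk(root)
	return out
}

func main() {
	extract := &Node{Goal: "extract storage interface"}
	swap := &Node{Goal: "swap ORM", Prereqs: []*Node{extract}}
	goal := &Node{Goal: "migrate to new DB", Prereqs: []*Node{swap}}

	for _, n := range Leaves(goal) {
		fmt.Println("ready:", n.Goal) // only nodes with no unmet prereqs
	}
}
```

As leaves are marked `Done`, their dependents become the new leaves, so the same traversal always tells you what to assign next.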

Audience: developers and technical leads using AI coding agents (Copilot, Cursor, Claude Code) for non-trivial implementation work. No prior knowledge of the Mikado Method required.

Duration: 25–30 min, adaptable to a lightning talk.

First delivery. No special requirements.
