Ben Gamble

Technology Sommelier, AI Whisperer

Cambridge, United Kingdom

A longtime builder of AI-powered games, simulations, and collaborative user experiences, Ben has previously built a global logistics company, large-scale online games, and augmented reality apps. He currently works to make fast data and AI a reality for everyone.

Area of Expertise

  • Consumer Goods & Services
  • Finance & Banking
  • Information & Communications Technology
  • Media & Information
  • Transports & Logistics

Topics

  • Programming Languages & Platforms
  • Distributed Systems
  • Big Data
  • Mobile Development
  • Cloud & Serverless Infrastructure
  • Event Driven Architecture
  • AWS Lambda
  • Game Development
  • Game Engines
  • Online Gambling
  • Cloud Architecture
  • Apache Kafka
  • Apache Cassandra
  • ClickHouse
  • Kafka Streams
  • Kafka Connect
  • Apache Spark
  • Apache Flink
  • Apache Pulsar
  • Apache Iceberg
  • Apache Arrow and Arrow Flight
  • Apache Druid
  • Unreal Engine

A Record’s ACID trip through Kafka

In this talk I'll explore how to use Apache Kafka's built-in transactions API to build transactional microservices, allowing distributed systems to meet their processing guarantees.
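
To make this concrete, here is a minimal sketch of the transactions API in the plain Java client; the topic names and `transactional.id` are illustrative, not taken from the talk.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TransactionalRelay {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // A stable transactional.id lets the broker fence zombie instances
        // of this producer after a crash or restart.
        props.put("transactional.id", "ledger-writer-1");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions(); // register with the transaction coordinator
            try {
                producer.beginTransaction();
                // Both records commit atomically: consumers running with
                // isolation.level=read_committed see both or neither.
                producer.send(new ProducerRecord<>("ledger", "account-42", "DEBIT 10.00"));
                producer.send(new ProducerRecord<>("ledger", "account-7", "CREDIT 10.00"));
                producer.commitTransaction();
            } catch (Exception e) {
                // Roll back the in-flight records. (Fatal errors such as a
                // fenced producer require closing the producer instead.)
                producer.abortTransaction();
                throw e;
            }
        }
    }
}
```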

If there's one thing we know about software, it is that eventually something will fail, and you may lose data.
This is not the end. Designing for failure has brought us many useful innovations like the TCP protocol, the Erlang programming language, and even Apache Kafka itself.
It's so important that databases have enshrined the resulting guarantees as ACID compliance.

But what happens when there is more than one system in your transaction? Classically, microservices have to do more than just commit changes to one database or one Kafka topic. And how do you maintain exactly-once processing guarantees when any of those systems may fail?

Sometimes it takes more than a two-phase commit, and when you are dealing with payment systems, being able to act on a stream continuously while maintaining ACID characteristics and exactly-once semantics is mandatory.
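
For continuous processing specifically, Kafka Streams packages these guarantees behind a single configuration value. A minimal sketch, with illustrative application and topic names:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class ExactlyOncePipeline {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "payments-pipeline");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // One setting: input offsets, state-store changelogs, and output
        // records are all committed in a single Kafka transaction.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("payments").to("balances"); // trivial pass-through topology

        new KafkaStreams(builder.build(), props).start();
    }
}
```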

Follow along as we see what happens when systems start to degenerate, and what we can and can’t trust at scale. Learn when to use Kafka as the transaction controller, what can and can’t be stateless, and what the tradeoffs are.

We will explore completion criteria, the routing slip pattern, the outbox pattern, and others as we go on a trip through the various methods of ensuring ACID (atomicity, consistency, isolation, and durability) compliance and exactly-once processing in an asynchronous distributed system. Leave with a few extra tools and patterns for making large-scale systems reliable.
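
As a taste of one of those patterns, here is a sketch of the outbox pattern against a relational store; the JDBC URL, table, and column names are hypothetical.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class OutboxWriter {
    // The business update and its event are written in ONE local database
    // transaction, so we can never record a state change without its event.
    // A separate relay (for example Kafka Connect or a CDC tool) ships rows
    // from the outbox table into a Kafka topic.
    public void debitAccount(String accountId, long cents) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/bank")) {
            conn.setAutoCommit(false);
            try (PreparedStatement debit = conn.prepareStatement(
                     "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                 PreparedStatement outbox = conn.prepareStatement(
                     "INSERT INTO outbox (topic, event_key, payload) VALUES (?, ?, ?)")) {
                debit.setLong(1, cents);
                debit.setString(2, accountId);
                debit.executeUpdate();

                outbox.setString(1, "ledger");
                outbox.setString(2, accountId);
                outbox.setString(3, "{\"type\":\"DEBIT\",\"cents\":" + cents + "}");
                outbox.executeUpdate();

                conn.commit(); // both rows land, or neither does
            } catch (Exception e) {
                conn.rollback();
                throw e;
            }
        }
    }
}
```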

Going Multiplayer with Kafka

Today we’ll walk through building multi-user and multiplayer spaces for games, collaboration, and creation, leveraging Apache Kafka® for state management and stream processing to handle conflicts and atomic edits.

We’re not building the metaverse! But as technology matures, ideas jump between disciplines, from ReactJS borrowing ideas from game rendering to recent innovations in ECS patterns that borrow heavily from database column stores: there’s never been a better time to bring ideas from one sector of software engineering to another. Apache Kafka® makes event management simple, so what can we borrow to make it collaborative?

Starting with a simple chat application and working up to cursor sharing, collaborative editing, and even a multiplayer game, we’ll walk through how to collect and manage user inputs, and how backing onto an event log allows for version control, undo, and time travel. We’ll also explore the various ways you can build a canonical source of truth in a distributed system, from snapshots to lockstep sync to eventual consistency. Along the way we’ll learn a bit about CRDTs (conflict-free replicated data types), mergeable data structures, and some of the ways to manage this complexity effectively.
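
As a flavour of what CRDTs buy you, here is a tiny sketch of one of the simplest, a grow-only counter; the class is illustrative, not code from the talk.

```java
import java.util.HashMap;
import java.util.Map;

// G-Counter: each replica increments only its own slot, and merge takes the
// per-replica maximum. Merges are commutative, associative, and idempotent,
// so replicas converge no matter how often or in what order they sync.
public class GCounter {
    private final String replicaId;
    private final Map<String, Long> counts = new HashMap<>();

    public GCounter(String replicaId) {
        this.replicaId = replicaId;
    }

    public void increment() {
        counts.merge(replicaId, 1L, Long::sum);
    }

    public long value() {
        return counts.values().stream().mapToLong(Long::longValue).sum();
    }

    public void merge(GCounter other) {
        other.counts.forEach((id, n) -> counts.merge(id, n, Math::max));
    }
}
```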

Going AsyncAPI: the good, the bad, and the awesome

In this talk, I’ll explore the good, bad, and awesome aspects of building AsyncAPI into our open data hub. As advocates of open source tools, it is our mission to simplify the collection and distribution of streaming data by taking care of everything under the hood, including business-to-business exchange of data and “last mile” delivery to end consumers.

Beginning with a discussion of OpenAPI, I’ll walk you through our deliberations: why we chose AsyncAPI, how it helped us, and what it cost. I’ll tell you how we improved our tools to make use of AsyncAPI specs, how we managed the gaps in the specification, and the benefits we saw along the way.

AsyncAPI spun out of OpenAPI with the goal of solving some of its shortcomings: the initiative set out to standardize asynchronous, event-driven APIs across the industry. With the proliferation of IoT devices and the connectivity promised by 5G, having standard ways to connect has become more important than ever.
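
For readers who haven’t seen one, here is a minimal, hypothetical AsyncAPI document; the channel and payload are invented for illustration, not taken from the platform described above.

```yaml
asyncapi: '2.6.0'
info:
  title: Open Data Hub Feed
  version: '1.0.0'
channels:
  vehicle/positions:
    subscribe:
      summary: Consume live vehicle position updates.
      message:
        name: VehiclePosition
        contentType: application/json
        payload:
          type: object
          properties:
            vehicleId: { type: string }
            lat:       { type: number }
            lon:       { type: number }
```

From a document like this, AsyncAPI tooling can generate documentation, client stubs, and validation code, which is the kind of code generation and validation discussed below.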

AsyncAPI has been added to every product we host on our open platform. Why? Because we believe AsyncAPI is a good standard for open, event-based data APIs, and we want to support a proper way to carry out code generation and validation, with specifications that make sense.

Thousands of software engineers around the world have contributed code, documentation, tests, or other improvements to open source projects. With the help of initiatives like AsyncAPI, we want to help people liberate their data by tackling the common challenges they face when trying to distribute it.
