Rafal Leszko

Hazelcast, Cloud Software Engineer

Kraków, Poland

Cloud software engineer at Hazelcast, author of the book "Continuous Delivery with Docker and Jenkins", trainer, and conference speaker. He specializes in Java development, cloud environments, and Continuous Delivery. Former employee of a number of companies and scientific organizations: Google, CERN, AGH University, and more.

Area of Expertise

  • Information & Communications Technology


  • Java
  • Kubernetes
  • Cloud Native
  • Caching
  • Continuous Delivery
  • Microservices

Distributed Locking in Kubernetes

Some say that there is no such thing as a "distributed lock". Still, sooner or later you'll encounter a requirement that only one of your application replicas may execute a given operation at a given time. How do you do it correctly and safely in Kubernetes?

In this session I'll present the following aspects of distributed locking, all in the context of Kubernetes:
- using "Lease" resource as a distributed lock
- using "ConfigMap" resource as a distributed lock
- using distributed locking libraries (Redis, Hazelcast, Zookeeper)
- optimistic vs pessimistic locking
- making locking safe with fencing
- the split-brain problem and how consensus algorithms come to the rescue
- real-life use-cases for distributed locking
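The fencing idea from the list above can be sketched in plain Go: the lock service hands out a monotonically increasing token with every acquisition, and the protected resource rejects any write carrying a token older than the newest one it has seen. All names here are illustrative, not taken from any specific library:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// LockService hands out fencing tokens: each acquisition
// returns a strictly larger token than any previous one.
type LockService struct {
	mu    sync.Mutex
	token uint64
}

func (s *LockService) Acquire() uint64 {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.token++
	return s.token
}

var ErrStaleToken = errors.New("stale fencing token")

// Storage accepts a write only if its fencing token is at least as
// new as the newest token seen so far, so a paused client holding
// an expired lock cannot corrupt data.
type Storage struct {
	mu      sync.Mutex
	highest uint64
	data    string
}

func (st *Storage) Write(token uint64, value string) error {
	st.mu.Lock()
	defer st.mu.Unlock()
	if token < st.highest {
		return ErrStaleToken
	}
	st.highest = token
	st.data = value
	return nil
}

func main() {
	locks := &LockService{}
	store := &Storage{}

	t1 := locks.Acquire() // client A gets token 1, then pauses (long GC, network partition)
	t2 := locks.Acquire() // lock expires; client B gets token 2

	fmt.Println(store.Write(t2, "from B")) // <nil>
	fmt.Println(store.Write(t1, "from A")) // stale fencing token
}
```

The key point: the lock alone is not enough, because a client can keep acting after its lock has silently expired; only the token check on the resource side makes the scheme safe.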

Architectural Caching Patterns for Kubernetes

Kubernetes brings new ideas of how to organize the caching layer for your applications. You can still use the old-but-good client-server topology, but now there is much more than that. This session starts with the well-known distributed caching topologies: embedded, client-server, and cloud. Then, I'll present Kubernetes-only caching strategies, including:
- Sidecar Caching
- Reverse Proxy Caching with Nginx
- Reverse Proxy Sidecar Caching with Hazelcast
- Envoy-level caching with Service Mesh

In this session you'll see:
- A walk-through of all caching topologies you can use in Kubernetes
- Pros and Cons of each solution
- The future of caching in container-based environments

5 Levels of High Availability: from Multi-instance to Hybrid Cloud

Does running your application on multiple machines mean it's highly available? Technically yes, but the term HA already means more than that. Take a Kubernetes installation: if you install it on AWS, it's not considered HA unless the master nodes are spread across different availability zones, not just different machines. And there is much more to this topic.

In this session I'll present 5 high availability levels:
1. Multi-instance
2. Multi-zone
3. Multi-region
4. Multi-cloud
5. Hybrid cloud

I'll discuss real-life use cases we experienced while developing Hazelcast and present examples of the related technical features you may need: in-memory partition backups, zone-aware partition groups, and WAN replication.
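The core idea behind zone-aware partition groups can be sketched as a placement rule: never put a partition's backup in the same availability zone as its primary, so losing one zone never loses both copies. The types and function below are illustrative; the actual Hazelcast placement logic is considerably more involved:

```go
package main

import (
	"errors"
	"fmt"
)

// Member is a cluster member together with its availability zone.
type Member struct {
	Name string
	Zone string
}

var ErrNoOtherZone = errors.New("no member outside the primary's zone")

// pickBackup chooses a backup owner for a partition in a different
// zone than the primary, so a single-zone outage cannot take out
// both the primary copy and its backup.
func pickBackup(primary Member, members []Member) (Member, error) {
	for _, m := range members {
		if m.Zone != primary.Zone {
			return m, nil
		}
	}
	return Member{}, ErrNoOtherZone
}

func main() {
	members := []Member{
		{"member-1", "eu-west-1a"},
		{"member-2", "eu-west-1a"},
		{"member-3", "eu-west-1b"},
	}
	backup, err := pickBackup(members[0], members)
	fmt.Println(backup.Name, err) // member-3 <nil>
}
```

The error case is the interesting one: with all members in a single zone, no safe backup placement exists, which is exactly why a single-zone cluster is only multi-instance HA, not multi-zone HA.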

In this session you'll learn:
- Why can Kubernetes be deployed in multiple zones but never in multiple regions?
- What options do you have when designing for high availability (for both cloud and on-premises infrastructures)?
- What are the trade-offs when choosing between high availability and strict consistency?
- What are the best practices for deploying consistent systems in a Hybrid Cloud?

Build Your Kubernetes Operator with the Right Tool!

You want to build a Kubernetes Operator for your software. Which tool should you choose? Operator SDK with Helm, Ansible, or Go? Or maybe start from scratch with Python, Java, or any other programming language? And which level of the Operator Capability/Maturity Model should you aim for?

In my talk I'll present:
- Different ways of building Kubernetes Operators
- Demo of building the same Operator using different tools
- Methods used by the most popular Operators (Couchbase, Prometheus, MongoDB)
- Operator Capability Model and how it affects your toolkit
- Our journey with Hazelcast Operator
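Whatever tool you pick, every Operator boils down to the same reconcile loop: observe the actual state, compare it with the desired state, and act on the difference. A minimal sketch, with illustrative stand-in types rather than real Kubernetes objects:

```go
package main

import "fmt"

// Desired and Actual are stand-ins for a custom resource's spec
// and the cluster state an Operator observes.
type Desired struct{ Replicas int }
type Actual struct{ Replicas int }

// reconcile returns the actions needed to drive actual state
// toward desired state; a real Operator would issue API calls
// instead of returning strings.
func reconcile(d Desired, a Actual) []string {
	var actions []string
	switch {
	case a.Replicas < d.Replicas:
		for i := a.Replicas; i < d.Replicas; i++ {
			actions = append(actions, fmt.Sprintf("create pod %d", i))
		}
	case a.Replicas > d.Replicas:
		for i := a.Replicas; i > d.Replicas; i-- {
			actions = append(actions, fmt.Sprintf("delete pod %d", i-1))
		}
	}
	return actions
}

func main() {
	fmt.Println(reconcile(Desired{Replicas: 3}, Actual{Replicas: 1}))
}
```

Operator SDK, Kopf, or a hand-written controller all differ mainly in how much of the machinery around this loop (watches, caching, retries) they generate for you.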
