Speaker

Sami Alashabi

Solution Architect at Essent

Leiden, The Netherlands

I am a Solution Architect at Essent and an Associate Manager Data & AI at Accenture, with a passion for unlocking value from data.
My main areas of expertise lie in Big Data Analytics, Stream Processing, Cloud Architecture, Microservices, and Integrations.
When not architecting or programming, I like to travel and enjoy quality time with my family.

Contact and references:
sami.alashabi@essent.nl / sami.alashabi@accenture.com
LinkedIn: https://www.linkedin.com/in/sami-alashabi/

Area of Expertise

  • Energy & Basic Resources
  • Finance & Banking
  • Information & Communications Technology
  • Media & Information
  • Travel & Tourism
  • Business & Management

Topics

  • Event Driven Architecture
  • Data Mesh
  • Event Streaming
  • Apache Kafka
  • Architecture
  • AWS
  • Confluent
  • Engineering
  • Software Engineering
  • Data Engineering
  • PySpark
  • Databricks

Unlocking Real-time Data Insights: Building a Kappa Architecture with Kafka

In today's fast-paced, data-driven world, organizations are constantly seeking ways to harness real-time data for better decision-making. One solution that has gained considerable traction is the Kappa architecture, a streamlined approach that leverages Apache Kafka to process and analyze data in real time, eliminating the need for traditional batch processing. This session explores the setup of a Kappa architecture with Kafka and delves into the numerous benefits it offers in bridging the gap between the analytical and operational sides of an organization.

Kappa architecture, a cousin of the Lambda architecture, is designed to simplify data processing pipelines by exclusively relying on real-time stream processing, thus eliminating the complexities associated with batch processing. Apache Kafka, as a distributed streaming platform, plays a central role in facilitating this architecture, enabling the seamless ingestion, processing, and analysis of data streams.

This session will provide a comprehensive overview of the key components required to establish a Kappa architecture using Kafka, including data ingestion, stream processing, and data storage. It will also address the challenges and best practices for designing a robust and scalable real-time data pipeline using Spark Streaming & Apache Flink.
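To make the stream-first idea concrete, here is a minimal, dependency-free Kotlin sketch (all names are illustrative, not a real Kafka or Flink API): every event updates state as it arrives, so the "real-time view" is always current and no separate batch layer is needed.

```kotlin
// Dependency-free sketch of stream-first (Kappa-style) processing:
// each event updates state as it arrives; there is no batch layer.
// MeterReading and RunningTotals are illustrative names, not a Kafka API.

data class MeterReading(val meterId: String, val kwh: Double)

class RunningTotals {
    private val totals = mutableMapOf<String, Double>()

    // Process one event at a time, exactly as a stream processor would.
    fun process(event: MeterReading): Double {
        val updated = (totals[event.meterId] ?: 0.0) + event.kwh
        totals[event.meterId] = updated
        return updated // the real-time view after this event
    }

    fun currentTotal(meterId: String): Double = totals[meterId] ?: 0.0
}

fun main() {
    val totals = RunningTotals()
    val stream = listOf(
        MeterReading("m1", 1.5),
        MeterReading("m2", 3.0),
        MeterReading("m1", 2.5),
    )
    stream.forEach { totals.process(it) }
    println(totals.currentTotal("m1")) // 4.0
}
```

In a real deployment the event loop is driven by a Kafka consumer or a Spark Streaming/Flink job, and the state lives in a fault-tolerant store rather than an in-memory map.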

The adoption of a Kappa architecture with Kafka offers several compelling benefits to organizations, which we will delve into with examples:

- Real-time Insights
- Reduced Complexity
- Operational Efficiency
- Scalability, Flexibility & Integration

This session will serve as a guide for organizations looking to transition from traditional batch processing to a real-time data processing paradigm using Kafka and the Kappa architecture. Attendees will gain valuable insights into the practical implementation, best practices, and the transformative potential of this approach, helping them unlock the true power of their data and stay competitive in an ever-evolving landscape.

Building Kafka Connectors with Kotlin: A Step-by-Step Guide to Creation and Deployment

Kafka Connect, the framework for building scalable and reliable data pipelines, has gained immense popularity in the data engineering landscape. This session will provide a comprehensive guide to creating Kafka connectors using Kotlin, a language known for its conciseness and expressiveness.

In this session, we will explore a step-by-step approach to crafting Kafka connectors with Kotlin, from inception to deployment, using a simple use case. The process includes the following key aspects:

Understanding Kafka Connect: We'll start with an overview of Kafka Connect and its architecture, emphasizing its importance in real-time data integration and streaming.

Connector Design: Delve into the design principles that govern connector creation. Learn how to choose between source and sink connectors and identify the data format that suits your use case.

Building a Sink Connector: We'll then build a Kafka sink connector, exploring key considerations such as data transformations, serialization, deserialization, error handling, and delivery guarantees. You will see how Kotlin's concise syntax and type safety can simplify the implementation.
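As a taste of what this step covers, the following dependency-free Kotlin sketch shows the batching and error-handling shape a sink task's put()/flush() cycle typically has. The types here (Record, SketchSinkTask) are illustrative stand-ins; a real connector implements org.apache.kafka.connect.sink.SinkTask.

```kotlin
// Illustrative stand-ins for Kafka Connect's SinkRecord and a target system.
// A real connector implements org.apache.kafka.connect.sink.SinkTask; this
// sketch only shows the put()-style buffering and error-handling shape.

data class Record(val topic: String, val key: String?, val value: String)

class RetriableWriteException(message: String) : RuntimeException(message)

class SketchSinkTask(private val write: (List<Record>) -> Unit) {
    private val buffer = mutableListOf<Record>()

    // Analogous to SinkTask.put(): buffer incoming records for batch delivery.
    fun put(records: Collection<Record>) {
        buffer += records
    }

    // Analogous to flushing before offset commit: deliver the batch,
    // retrying once on a transient failure.
    fun flush(): Int {
        if (buffer.isEmpty()) return 0
        try {
            write(buffer.toList())
        } catch (e: RetriableWriteException) {
            write(buffer.toList()) // one retry; real tasks back off and rethrow
        }
        val delivered = buffer.size
        buffer.clear()
        return delivered
    }
}
```

Note how Kotlin's data classes and nullable types (key: String?) keep the record-handling code compact and explicit about missing keys.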

Testing: Learn how to rigorously test your connector to ensure its reliability and robustness, utilizing best practices for testing in Kotlin.
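For instance (a self-contained sketch; the session itself uses proper Kotlin test frameworks), keeping record transformations pure makes them testable with plain assertions, long before the connector ever touches a Connect cluster. The maskEmail function below is a hypothetical example transformation, not part of any real connector API.

```kotlin
// Illustrative sketch: unit-testing a pure transformation in isolation.
// Pure functions need no running Kafka cluster to be verified.

fun maskEmail(value: String): String =
    Regex("[A-Za-z0-9._%+-]+@").replace(value, "***@")

fun main() {
    check(maskEmail("contact: jane.doe@example.com") == "contact: ***@example.com")
    check(maskEmail("no address here") == "no address here")
    println("transformation tests passed")
}
```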

Connector Deployment: We'll go through the process of deploying your connector to a Kafka Connect cluster and discuss strategies for monitoring and scaling.

Real-World Use Cases: Explore real-world examples of Kafka connectors built with Kotlin.

By the end of this session, you will have a solid foundation for creating and deploying Kafka connectors using Kotlin, equipped with practical knowledge and insights to make your data integration processes more efficient and reliable. Whether you are a seasoned developer or new to Kafka Connect, this guide will help you harness the power of Kafka and Kotlin for seamless data flow in your applications.

Connecting SAP to the Data Lake: Making It Look as Simple as It Sounds

As we transition from on-premises to the cloud, what becomes of our Change Data Capture solution? This shift presents an opportunity to discover innovative ways to create solutions that stand the test of time. Join us as we delve deeper into the process of integrating SAP data into the Data/Delta Lake, with the reliable Apache Kafka at the center of it all.

Through this solution, you will discover how to combine various components of SAP, AWS, and Confluent. The process begins with real-time replication using SAP Landscape Transformation (SLT). This allows connectors set up on self-managed Kafka Connect clusters in AWS Elastic Container Service (ECS) to extract data from delta-enabled SAP sources. Kafka Streams handles the crucial transformations, and the final step completes the puzzle by landing the data in the Data/Delta Lake. This way, the SAP data can be used for real-time analysis and recommendations in client-facing applications.

In this session, we examine the benefits of a strong focus on automation in simplifying the implementation process. From building and configuring Kafka Connect on AWS Containers to establishing Confluent clusters and managing role-based access on Confluent Cloud, our entire infrastructure is set up using Infrastructure as Code with Terraform.

Kafka Summit London 2024

March 2024, London, United Kingdom
