Naci Simsek
Ververica, Manager - Customer Success Technical Engineering
Düsseldorf, Germany
Naci Simsek is a Senior Customer Success Technical Manager at Ververica with over 17 years of experience in IT and Telecom. He began his career as a Customer Support Engineer at Nortel Networks, advancing through roles as Software Engineer, Engineering Team Lead, Project Manager, and Solutions Architect at Huawei. Over nearly a decade, he specialized in customer-facing big data solutions as a Platform Engineer, BI Engineer, and Data Engineer. In his current position, he supports customers in leveraging Apache Flink for real-time data streaming across on-premises and cloud environments.
He holds a Bachelor’s degree in Computer Engineering from Ege University, an MBA from Bahcesehir University, and the PMP® certification.
The Flink Mistake Playbook: 2 Years of Real-World Debugging
Even seasoned engineers can encounter unexpected challenges when building with Apache Flink, often leading to performance bottlenecks or stability issues. This session confronts three critical, yet common, mistakes that frequently trip up Flink users and significantly impact pipeline health. Attendees will learn how to effectively navigate Kafka connector migration rules to prevent uncontrolled state growth, optimize serialization to eliminate throughput-sapping Kryo fallbacks, and implement strategies for even load distribution across their Flink clusters. The talk offers practical, actionable insights, empowering developers to avoid these pitfalls and build truly robust, high-performance Flink applications.
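As an illustration of the serialization point (not taken from the session material), here is a minimal Java sketch with a hypothetical ClickEvent type: modelling events as proper POJOs lets Flink use its PojoSerializer, and disabling generic types turns any accidental Kryo fallback into an immediate error instead of a silent slowdown. Newer Flink releases expose the same switch as the pipeline.generic-types configuration option.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KryoFallbackCheck {

    // Hypothetical event type written as a Flink POJO: public class, public
    // no-argument constructor, public fields. This lets Flink serialize it
    // with the PojoSerializer rather than falling back to Kryo.
    public static class ClickEvent {
        public String userId;
        public long timestamp;

        public ClickEvent() {}

        public ClickEvent(String userId, long timestamp) {
            this.userId = userId;
            this.timestamp = timestamp;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        // Fail fast instead of silently serializing with Kryo: any type that
        // Flink cannot handle as a POJO or other built-in type now throws
        // when the job graph is built.
        env.getConfig().disableGenericTypes();

        env.fromElements(
                new ClickEvent("user-1", 1L),
                new ClickEvent("user-2", 2L))
            .keyBy(e -> e.userId)
            .map(e -> e.userId + "@" + e.timestamp)
            .print();

        env.execute("kryo-fallback-check");
    }
}
```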
Let's Deeploy Flink - Uncovering Hidden Depths of YARN, Docker, Kubernetes & Beyond
Choosing the right environment to run Apache Flink applications—Standalone, Docker, Kubernetes, or YARN—is a critical first decision that often presents a bewildering array of options. This session cuts through the complexity, providing a practical, in-depth comparison of these core deployment modes. The talk delves into their fundamental differences, evaluates their respective pros and cons, and pinpoints when each mode truly shines (and when to avoid it). Engineers will leave equipped with the clarity to confidently select the optimal deployment strategy, ensuring their Flink applications are set up for success from the start.
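To make the comparison concrete, a small hedged sketch: the application code itself stays identical across deployment modes, and the execution target is normally chosen at submission time (for example flink run -t yarn-application or flink run -t kubernetes-application). The local target below is used only so the snippet runs on its own.

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.DeploymentOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DeploymentTargetSketch {

    public static void main(String[] args) throws Exception {
        // The job logic does not change between Standalone, YARN, and
        // Kubernetes; only the execution target and the surrounding
        // infrastructure differ. In practice the target is usually passed
        // on the command line rather than set in code.
        Configuration conf = new Configuration();
        conf.set(DeploymentOptions.TARGET, "local"); // swap per environment

        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment(conf);

        env.fromSequence(1, 10)
            .map(n -> "event-" + n)
            .print();

        env.execute("deployment-target-sketch");
    }
}
```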
Data Lakes on Flink: Hudi, Iceberg, Paimon - Decoding the Deluge of Formats
Building robust data lakes with Apache Flink often leads to a crucial decision: selecting the right open table format from options like Apache Hudi, Iceberg, and Paimon. This session cuts through the common confusion, providing a head-to-head comparison of these seemingly similar technologies. Attendees will learn the three key distinctions vital for making an informed choice: how each format organizes data under the hood, their respective optimizations for various update and query patterns, and the nuances of their integration with Flink. The talk equips engineers with the practical understanding needed to confidently select the best-suited format and build an optimized Flink-powered data lake architecture.
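As one hedged example of the Flink integration angle, the sketch below registers an Apache Paimon catalog from a Java TableEnvironment and streams upserts into a primary-keyed lake table. It assumes the Paimon Flink connector is on the classpath; the warehouse path, table names, and the datagen source are placeholders.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class PaimonLakeSketch {

    public static void main(String[] args) {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Register a Paimon catalog backed by a local warehouse directory.
        // In production this would point to object storage or HDFS.
        tEnv.executeSql(
            "CREATE CATALOG paimon_catalog WITH ("
                + " 'type' = 'paimon',"
                + " 'warehouse' = 'file:///tmp/paimon'"
                + ")");
        tEnv.executeSql("USE CATALOG paimon_catalog");

        // A primary-keyed table: Paimon merges updates on the key, which is
        // what makes it suitable for changelog/upsert-style ingestion.
        tEnv.executeSql(
            "CREATE TABLE IF NOT EXISTS orders ("
                + " order_id BIGINT,"
                + " customer STRING,"
                + " amount DOUBLE,"
                + " PRIMARY KEY (order_id) NOT ENFORCED"
                + ")");

        // Continuously write rows from a demo source into the lake table.
        tEnv.executeSql(
            "CREATE TEMPORARY TABLE order_source ("
                + " order_id BIGINT,"
                + " customer STRING,"
                + " amount DOUBLE"
                + ") WITH ("
                + " 'connector' = 'datagen',"
                + " 'rows-per-second' = '5'"
                + ")");
        tEnv.executeSql("INSERT INTO orders SELECT * FROM order_source");
    }
}
```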
Build Real-Time ML & AI Pipelines with Apache Flink
This session provides a comprehensive guide to designing and building real-time machine learning and AI pipelines from the ground up using Apache Flink. It illuminates how Flink's unified streaming model, powered by its DataStream and SQL APIs, enables low-latency inference with both remote and embedded models. Attendees will witness live demonstrations of real-time training scenarios, such as click-event streaming and vector embedding updates, and learn to stitch together powerful Generative AI workflows with Retrieval-Augmented Generation. The talk equips engineers with a concrete, battle-tested blueprint for deploying scalable, production-grade ML/AI applications in a true streaming paradigm.
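A minimal sketch of the remote-inference pattern mentioned above, using Flink's async I/O so model calls do not block the stream. The model-serving endpoint and the ScoreAsync function are hypothetical stand-ins, not part of the session's demos.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Collections;
import java.util.concurrent.TimeUnit;

import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.async.AsyncFunction;
import org.apache.flink.streaming.api.functions.async.ResultFuture;

public class RemoteModelInference {

    // Calls a placeholder model-serving HTTP endpoint for each event and
    // emits "event -> score". Async I/O keeps the pipeline from blocking
    // on network round trips.
    public static class ScoreAsync implements AsyncFunction<String, String> {

        private transient HttpClient client;

        @Override
        public void asyncInvoke(String event, ResultFuture<String> resultFuture) {
            if (client == null) {
                client = HttpClient.newHttpClient();
            }
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://model-server:8080/score")) // placeholder endpoint
                .POST(HttpRequest.BodyPublishers.ofString(event))
                .build();

            client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .whenComplete((response, error) -> {
                    if (error != null) {
                        resultFuture.completeExceptionally(error);
                    } else {
                        resultFuture.complete(
                            Collections.singleton(event + " -> " + response.body()));
                    }
                });
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> events = env.fromElements("click-1", "click-2", "click-3");

        // Up to 100 in-flight requests, 5 s timeout, results emitted unordered.
        AsyncDataStream.unorderedWait(events, new ScoreAsync(), 5, TimeUnit.SECONDS, 100)
            .print();

        env.execute("remote-model-inference-sketch");
    }
}
```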
The Need for Speed: Optimizing Apache Flink for Low-Latency Stream Processing
While Apache Flink excels at low-latency stream processing, achieving optimal end-to-end performance often requires specialized tuning beyond standard configurations. This session dives deep into critical best practices and advanced techniques for drastically reducing latency in Flink applications, ensuring your pipelines operate at peak efficiency. Attendees will explore direct latency optimizations, including resource allocation, state backend choices, and network buffer tuning, alongside strategies to improve throughput that inherently enhance real-time responsiveness. The talk provides a battle-tested blueprint for understanding, measuring, and systematically eliminating bottlenecks, empowering engineers to build truly high-performance Flink solutions.
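Two of the direct latency knobs mentioned above can be shown in a few lines; the values here are illustrative starting points, not recommendations.

```java
import org.apache.flink.runtime.state.hashmap.HashMapStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LowLatencyTuningSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        // Flush network buffers every 5 ms instead of the default 100 ms.
        // Lower values cut latency at the cost of some throughput;
        // setBufferTimeout(0) flushes after every record.
        env.setBufferTimeout(5);

        // For moderate state sizes, keeping state on the JVM heap avoids the
        // (de)serialization overhead of a disk-based state backend.
        env.setStateBackend(new HashMapStateBackend());

        env.fromSequence(1, 1_000)
            .keyBy(n -> n % 8) // spread keys across parallel subtasks
            .map(n -> n * 2)
            .print();

        env.execute("low-latency-tuning-sketch");
    }
}
```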
Let’s Matter-ialize Flink Batch and Streaming
Managing dual architectures for batch and stream processing often leads to operational headaches and data inconsistency. It’s time to unify your approach. In this session, we will deep-dive into Apache Flink Materialized Tables, a revolutionary feature designed to drastically simplify how you develop and manage data pipelines within a single SQL environment.
We will go beyond the surface to demonstrate how Flink automatically handles complex transformations and guarantees data freshness based on your specific targets. You won’t just learn the theory; we will explore the mechanics of flexible refresh modes—comparing continuous streaming updates against full batch reloads—to suit your specific latency needs.
By the end of this talk, you will gain practical insights into creating, managing, and optimizing these tables. You will leave equipped to build a more consistent, efficient data architecture where your engineering efforts yield real results, and where your data goals are finally Matter-ialized.
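For orientation, a sketch of what such a definition looks like, following the materialized-table syntax introduced in Flink 1.20. The table and column names are hypothetical, and the statement is shown as a string because materialized tables are managed through the Flink SQL client / SQL Gateway rather than a regular program.

```java
public class MaterializedTableSketch {

    public static void main(String[] args) {
        // Statement as it would be submitted through the Flink SQL client or
        // SQL Gateway. FRESHNESS declares the target data staleness; Flink
        // derives a continuous (streaming) or full (batch) refresh pipeline
        // from it, and REFRESH_MODE can pin the choice explicitly.
        String createMaterializedTable =
            "CREATE MATERIALIZED TABLE daily_order_stats\n"
                + "FRESHNESS = INTERVAL '30' SECOND\n"
                + "REFRESH_MODE = CONTINUOUS\n"
                + "AS SELECT\n"
                + "  order_date,\n"
                + "  COUNT(*)    AS order_cnt,\n"
                + "  SUM(amount) AS total_amount\n"
                + "FROM orders\n"
                + "GROUP BY order_date";

        System.out.println(createMaterializedTable);
    }
}
```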
Flink Forward 2025 Barcelona Sessionize Event
Data Streaming Summit San Francisco 2025 Sessionize Event
Data Streaming Summit Virtual 2025 Sessionize Event