
Zach Stagers
Head of Data Engineering at Advancing Analytics
Kirkby Lonsdale, United Kingdom
Zach leads the Data Engineering practice at Advancing Analytics and has well over a decade of experience across a range of industries in the Data & AI space. Throughout his career, he has implemented analytical platforms ranging from traditional Microsoft Business Intelligence to cutting-edge Data Lakehouse platforms, tackling a huge variety of use cases.
Advanced Spark for Data Engineers
Are you ready to take your Spark skills to the next level? This one-day, hands-on workshop is designed for data engineers who want to master advanced Spark techniques and build scalable, high-performance solutions. Whether you’re working with Databricks, Fabric, or both, this training will empower you to move beyond the basics and tackle real-world challenges with confidence.
By the end of this workshop, you will be able to:
- Write clean, modular Spark code with reusable functions and dynamic transformations.
- Apply advanced coding techniques to handle schema evolution, complex file formats, and secure workflows.
- Manage secrets effectively, integrating sensitive credentials seamlessly into Spark pipelines.
- Optimize and tune Spark jobs, understanding internal execution plans to maximize performance.
- Utilize advanced Databricks features, including Unity Catalog, Auto Loader, Repositories, and Databricks CLI & DBConnect, to streamline development and governance.
This workshop is designed for:
- Data engineers with foundational Spark knowledge who want to expand their expertise.
- Professionals building ETL pipelines, enabling real-time analytics, or processing large-scale datasets.
- Engineers working in Databricks, Fabric, or similar platforms, seeking advanced skills to enhance scalability and performance.
Let’s elevate your Spark skills and transform the way you approach data engineering. Join us for this intensive workshop and unlock the full potential of Spark!
A Data Engineer's Guide to Azure Synapse
There has been an explosion of interest in Azure Synapse Analytics as everyone races to get to grips with the all-in-one data analytics platform. But when opening up the box, you find it's a lot more complex than it's made out to be, with several different powerful compute engines, each with their own idiosyncrasies! Why do we have different flavours of each engine? When should you use Spark pools over SQL? What's the most cost-effective approach for different scenarios? What types of users should be using each service? The answers to these questions aren't always met with clarity!
This training day breaks down the Synapse workspace into its component parts and provides a foundation of knowledge for each piece. During the day, we will cover:
- Fundamentals of building a Lake-based analytical platform - how you structure a lake, what file format to choose, what kinds of data work it's suited for
- How the SQL Pools work, patterns for optimising performance and cost, and how we can use our SQL endpoints to integrate with other services
- The Synapse Spark engine, demonstrating how you can write dynamic workflows in Python or bring your existing SQL logic to Spark directly.
- Data Explorer pools and how you can use them for deep exploration of logs, time series and other fast-moving unstructured data sources
- Synapse Integrations, how you can take your workspace and integrate directly with tools such as Azure Purview, Cosmos DB and the wider Dataverse
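The "bring your existing SQL logic to Spark" point above can be sketched as follows. This is an illustrative assumption, not course content: the table, column, and query names are hypothetical, and the commented lines show how the generated SQL would run inside a Synapse Spark notebook via `spark.sql()`.

```python
def build_agg_query(table: str, group_col: str, metric: str) -> str:
    """Parameterise an existing SQL aggregate so the same logic can be
    reused dynamically across tables, rather than rewritten per source."""
    return (
        f"SELECT {group_col}, SUM({metric}) AS total_{metric} "
        f"FROM {table} GROUP BY {group_col}"
    )

# In a Synapse Spark notebook (where a SparkSession is already provided):
# spark.read.parquet("<lake path>").createOrReplaceTempView("sales")
# spark.sql(build_agg_query("sales", "region", "amount")).show()
```

Registering lake data as a temporary view lets teams keep their existing SQL largely unchanged while gaining Spark's distributed execution.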
There is a huge amount to cover, but you'll be guided by Data Platform MVP Simon Whiteley & veteran analytics consultant Zach Stagers, both of whom have deep knowledge across the whole of this wide and sprawling tech stack.
Data Lakehouse Serving Options
The lakehouse design paradigm is becoming more mainstream as tools like Azure Synapse and Databricks make querying lake-based data simple and bring a huge amount of power. But a de facto industry standard for serving a lake-based model to end users is yet to emerge, and with a growing number of options it can be difficult to make a decision that best fits your scenario.
In this session we will establish a benchmark of what we're looking for in a good data serving layer. We will then review the serving options currently available to evaluate how well they meet our requirements, any additional benefits and some potential pitfalls and things to avoid with each. You will leave the session equipped with everything you need to open up your lake to the wider business and harness the power of the lakehouse model.