Mastering Data Pipelines: Hands-On with Fabric Data Factory

In this session, I'll walk you through the basics of building data pipelines from scratch using Fabric Data Factory. We'll start by covering fundamental data pipeline concepts to build a solid foundation for the practical work ahead. Then, we'll move into a hands-on segment where I'll guide you step by step through creating a fully functional Extract, Transform, Load (ETL) pipeline.

I'll demonstrate how to use the features and capabilities of Fabric Data Factory, making ETL implementation accessible even if you're new to building data pipelines. By the end of this session, you'll have the knowledge and skills to design, build, and deploy your first ETL pipeline successfully.

We'll kick things off by showing you how to use the new Dataflows Gen2 to extract data from an Azure SQL database. You'll learn how to transform this data using the Power Query engine in Dataflows Gen2 and load the results into a Lakehouse.
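Dataflows Gen2 transformations are authored visually with Power Query, but if you prefer to think in code, here is a minimal PySpark sketch of the same extract-transform-load shape. The server, database, table, and column names are hypothetical placeholders, and it assumes a Fabric notebook with a default Lakehouse attached:

```python
# Minimal sketch; `spark` is the session provided by a Fabric notebook.
# The connection details and column names below are hypothetical placeholders.
from pyspark.sql import functions as F

# Extract: read a table from an Azure SQL database over JDBC
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://myserver.database.windows.net;databaseName=SalesDb")
    .option("dbtable", "dbo.Orders")
    .option("user", "etl_user")
    .option("password", "<secret>")
    .load()
)

# Transform: the kind of shaping you'd build as Power Query steps
cleaned = (
    orders
    .filter(F.col("OrderDate").isNotNull())
    .withColumn("TotalAmount", F.col("Quantity") * F.col("UnitPrice"))
)

# Load: land the result as a Delta table in the attached Lakehouse
cleaned.write.format("delta").mode("overwrite").saveAsTable("orders_cleaned")
```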

Next, we'll focus on using Data Pipelines to orchestrate various activities. We'll start by making an API call that returns data in JSON format, transform it using Spark Notebooks, and finally add a few extra steps to build a robust pipeline that leaves everything ready in the Lakehouse.
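To give a feel for the notebook step in that orchestration, here is a minimal sketch that calls a REST API, parses the JSON response, and writes the result to the Lakehouse. The API URL and field names are hypothetical placeholders, and it again assumes a Fabric notebook with a Lakehouse attached:

```python
# Minimal sketch of the Spark Notebook activity in the pipeline.
# The API endpoint and table name are hypothetical placeholders.
import requests
from pyspark.sql import functions as F

# Extract: call a REST API that returns JSON
response = requests.get("https://api.example.com/v1/products")
response.raise_for_status()
records = response.json()  # assumed to be a list of JSON objects

# Transform: turn the JSON payload into a DataFrame and stamp the load time
df = spark.createDataFrame(records)
df = df.withColumn("ingested_at", F.current_timestamp())

# Load: append the result to a Delta table in the Lakehouse
df.write.format("delta").mode("append").saveAsTable("products_raw")
```

In the actual pipeline, this notebook would run as one activity after the API call, with further activities handling validation or notifications around it.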

This session offers a balanced mix of theoretical knowledge and practical skills, ensuring you leave with a solid understanding of data pipeline construction and the confidence to start your own projects using Fabric Data Factory.

Rui Carvalho

Data Engineer at Devscope

Vila Real, Portugal
