Business Intelligence, Data Warehousing, Data Integration, ETL, Azure Data Platform, Azure Data Factory, Azure Synapse Analytics, Microsoft Data Platform, SQL Server Integration Services, SSIS, Biml, BimlScript, Microsoft SQL Server, SQL Server, SQL
You already know how to build, orchestrate, and monitor data pipelines in Azure Data Factory. But how do you go from basic, hardcoded pipelines to a dynamic and reusable solution?
In this session, we will dive straight into some of the more advanced features of Azure Data Factory. How do you parameterize your linked services, datasets, and pipelines? What is the difference between parameters and variables, and when should you use each? And how do the expression language and built-in functions really work?
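As a taste of what parameterization looks like, a dataset definition can accept parameters and reference them through the expression language. The sketch below is illustrative only; the dataset, linked service, and parameter names are hypothetical, not taken from the session demos:

```json
{
    "name": "DS_GenericBlob",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "LS_BlobStorage",
            "type": "LinkedServiceReference"
        },
        "parameters": {
            "ContainerName": { "type": "string" },
            "FileName": { "type": "string" }
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": { "value": "@dataset().ContainerName", "type": "Expression" },
                "fileName": { "value": "@dataset().FileName", "type": "Expression" }
            }
        }
    }
}
```

A pipeline can then reuse this single dataset for any container and file by passing in values at runtime, instead of creating one hardcoded dataset per file.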
We will answer these questions by going through an existing solution step-by-step and gradually making it dynamic and reusable. Along the way, we will cover best practices and lessons learned.
Session Level: 300 / Intermediate
Session Length: 45-75 minutes
Prerequisites: Must have experience with Azure Data Factory development.
Cathrine loves data and coding, as well as teaching and sharing knowledge 🤓 She is based in Norway and works as a Tech Lead and Senior Data Management & Analytics Consultant at Skill, focusing on Data Integration and Data Warehousing. Her core skills are Azure Data Factory, Azure Synapse Analytics, SSIS, Biml, and T-SQL development, but she enjoys everything from programming to data visualization. Outside of work she's active in the Azure and Microsoft Data communities as a Microsoft Data Platform MVP, international speaker, blogger, organizer, and chronic volunteer. She blogs at cathrinew.net and tweets at @cathrinew.
Although based in Norway, Cathrine loves traveling and speaking internationally.