
Building reliable data processing pipelines with Azure Databricks Delta Live Tables

Azure Databricks Delta Live Tables is a framework for building reliable, maintainable, and testable data processing pipelines.

You define the transformations to perform on your data, and Delta Live Tables handles task orchestration, cluster management, monitoring, data quality, and error handling for you.
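
As a rough sketch of what that declarative style looks like in a Python notebook (the table names, columns, and storage path below are hypothetical), each dataset is defined as a function, and Delta Live Tables infers the execution order from the references between them:

```python
import dlt
from pyspark.sql.functions import col

# Bronze step: ingest raw JSON files from cloud storage.
# The path is a placeholder for your own landing zone;
# `spark` is provided by the pipeline runtime.
@dlt.table(comment="Raw orders ingested from cloud storage.")
def raw_orders():
    return spark.read.json("/mnt/landing/orders")

# Silver step: dlt.read() declares the dependency on raw_orders,
# so the framework runs the two steps in the right order.
@dlt.table(comment="Orders with typed, renamed columns.")
def cleaned_orders():
    return (
        dlt.read("raw_orders")
        .select(
            col("order_id"),
            col("amount").cast("double").alias("amount_usd"),
            col("order_ts").cast("timestamp"),
        )
    )
```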

Instead of defining your data pipelines as a series of separate Apache Spark tasks, Delta Live Tables manages how your data is transformed based on a target schema you define for each processing step. You can also enforce data quality with a feature called "expectations", which let you define expected data quality and specify how to handle records that fail those expectations.
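
For example, expectations attach to a table definition as decorators. In this sketch (the rule names, conditions, and source table are illustrative), one rule only logs violations while another drops the failing records:

```python
import dlt

# Each expectation pairs a rule name with a SQL boolean expression.
# @dlt.expect records violations in pipeline metrics but keeps the rows;
# @dlt.expect_or_drop removes failing rows from the target table;
# @dlt.expect_or_fail would abort the update instead.
@dlt.table(comment="Orders that passed the quality checks.")
@dlt.expect("has_timestamp", "order_ts IS NOT NULL")
@dlt.expect_or_drop("positive_amount", "amount_usd > 0")
def validated_orders():
    return dlt.read("cleaned_orders")
```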

In this session, we will learn how to develop a data processing pipeline with Azure Databricks Delta Live Tables.

Who is the target attendee?
Data architects, ETL developers, and consultants

Why would that person want to attend your session?
- To learn a modern way to develop data processing pipelines

What can the attendee walk away with?
- What is an Azure Databricks Delta Live Table?
- Delta Live Tables concepts
- How to develop data processing pipelines with Delta Live Tables

Rajaniesh Kaushikk

Director Technology | Microsoft MVP | Databricks Champion | MCT | Author | Blogger

Dunellen, New Jersey, United States


