Data Quality Validations in Fabric Spark
When building a Lakehouse in Fabric Spark (or any other analytics data store), ensuring data quality is crucial. There are many aspects to check and various methods for checking them. In this session, we will start with the six dimensions of data quality defined by DAMA International (the Data Management Association): accuracy, completeness, consistency, timeliness, validity, and uniqueness. We will explore examples for each of these dimensions and examine different ways to ensure data quality in Spark within Fabric.
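For example, the completeness and uniqueness dimensions map directly onto built-in PySpark operations. The following is a minimal sketch, assuming a hypothetical customers table with customer_id and email columns (the table and column names are illustrative, not taken from the session):

```python
from pyspark.sql import SparkSession, functions as F

# In a Fabric notebook the `spark` session is already provided;
# getOrCreate() makes the sketch runnable elsewhere too.
spark = SparkSession.builder.getOrCreate()

# Hypothetical "customers" table, used only for illustration.
df = spark.read.table("customers")

# Completeness: rows where a mandatory column is missing.
missing_emails = df.filter(F.col("email").isNull()).count()

# Uniqueness: total row count vs. distinct key count.
duplicate_ids = df.count() - df.select("customer_id").distinct().count()

print(f"Rows with missing email: {missing_emails}")
print(f"Duplicate customer_id values: {duplicate_ids}")
```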
We will cover how to perform data quality checks with built-in PySpark functions, how to wrap those checks in reusable functions, and how to use Python modules for data quality validation. For the Python modules, we will focus on the Great Expectations library.
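As a taste of the reusable-function approach, the sketch below packages the two checks from the earlier example into small helpers that return a uniform result record (the function names and result shape are illustrative assumptions, not the session's actual code):

```python
from pyspark.sql import DataFrame, functions as F

def check_not_null(df: DataFrame, column: str) -> dict:
    """Completeness check: no nulls allowed in the given column."""
    failed = df.filter(F.col(column).isNull()).count()
    return {"check": "not_null", "column": column,
            "failed_rows": failed, "passed": failed == 0}

def check_unique(df: DataFrame, column: str) -> dict:
    """Uniqueness check: no duplicate values in the given column."""
    duplicates = df.count() - df.select(column).distinct().count()
    return {"check": "unique", "column": column,
            "failed_rows": duplicates, "passed": duplicates == 0}

# Run the checks against the hypothetical customers DataFrame from above.
for result in (check_not_null(df, "email"), check_unique(df, "customer_id")):
    print(result)
```

With Great Expectations, the same checks become declarative expectations. This sketch uses the legacy SparkDFDataset wrapper; the library's API has changed considerably across versions (newer releases use a fluent, context-based API), so treat it as one possible shape rather than the definitive one:

```python
from great_expectations.dataset import SparkDFDataset

# Wrap the Spark DataFrame so expectations can be evaluated against it.
gdf = SparkDFDataset(df)

# Completeness, uniqueness, and validity expressed as expectations.
checks = [
    gdf.expect_column_values_to_not_be_null("email"),
    gdf.expect_column_values_to_be_unique("customer_id"),
    gdf.expect_column_values_to_match_regex("email", r"^[^@]+@[^@]+\.[^@]+$"),
]

for check in checks:
    # Each result carries a boolean `success` flag plus diagnostic detail.
    print(check.success)
```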
Additionally, we will discuss materialized views in Spark, a new Fabric feature that includes built-in data quality checks.
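As an illustration, Fabric's materialized lake views (in preview at the time of writing, so the DDL may still change) let you attach CHECK constraints that drop or fail on non-conforming rows. A sketch of that pattern issued from PySpark, again with hypothetical schema, view, and column names:

```python
# Sketch of a materialized lake view with a built-in data quality
# constraint. The feature is in preview and the exact DDL may change;
# all names here are hypothetical.
spark.sql("""
    CREATE MATERIALIZED LAKE VIEW IF NOT EXISTS silver.customers_clean (
        CONSTRAINT email_present CHECK (email IS NOT NULL) ON MISMATCH DROP
    )
    AS
    SELECT customer_id, email
    FROM bronze.customers
""")
```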
By the end of this session, the audience will have a better understanding of the key considerations for data quality and the various methods to implement validations.