Better data testing with the data (error) generating process
Statisticians often approach probabilistic modeling by first understanding the conceptual data generating process. However, when validating messy real-world data, the technical aspects of that process are largely ignored.
In this talk, I will argue the case for developing more semantically meaningful and well-curated data tests by incorporating both conceptual and technical aspects of "how the data gets made".
To illustrate these concepts, we will explore the NYC subway rides open dataset to see how the simple act of reasoning about real-world events and their collection through ETL processes can help craft far more sensitive and expressive data quality checks. I will also demonstrate how to instrument such checks using new features in the dbt-utils package (pending approval of a PR that I recently authored).
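As a minimal sketch of what such instrumented checks can look like today, the following dbt schema configuration uses two tests that already exist in dbt-utils; the model and column names are hypothetical, and the grouped-check syntax from the pending PR is intentionally not shown since it is still in review:

version: 2

models:
  - name: subway_rides              # hypothetical model built from the NYC open data
    tests:
      # a new batch of rides should land at least once a day;
      # a stale table often signals a broken extract job upstream
      - dbt_utils.recency:
          datepart: day
          field: ride_date
          interval: 1
    columns:
      - name: entries               # hypothetical turnstile entry count
        tests:
          # differenced turnstile counters should never be negative
          - dbt_utils.accepted_range:
              min_value: 0
              inclusive: true

Reasoning about the ETL process is what supplies the specific thresholds here: the recency interval mirrors the publisher's load cadence, and the range test encodes how differenced counter values can plausibly go wrong.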
Audience members should leave this talk with a clear framework for designing better tests for their own pipelines.
Prior work inspiring this talk comes from past blog posts on grouped data checks (https://www.emilyriederer.com/post/grouping-data-quality/) and common causes of error in ETL pipelines (https://www.emilyriederer.com/post/data-error-gen/), as well as an in-review PR to dbt-utils (to be reviewed and, per initial communications with the dbt team, approved before this conference).
Emily Riederer
Senior Analytics Manager at Capital One
Chicago, Illinois, United States