Efficient data pipelines using Delta Live Tables

Implementing data pipelines efficiently in Databricks is not easy. I will explain how to use Delta Live Tables (DLT) to reduce implementation time while letting the DLT framework handle the tedious parts: task orchestration, cluster management, monitoring, data quality, and error handling.
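As a taste of what the session covers, a minimal DLT pipeline in Python looks like the sketch below: tables are declared with decorators, and the framework infers the dependency graph and enforces data-quality expectations. The storage path, table names, and column names here are hypothetical, and the code only runs inside a Databricks DLT pipeline (where `spark` is provided).

```python
import dlt
from pyspark.sql.functions import col

# Hypothetical landing path; in a real pipeline this would point at your storage.
@dlt.table(comment="Raw events ingested incrementally from cloud storage")
def raw_events():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/events")
    )

# DLT tracks the expectation in its event log and drops rows that violate it,
# so data-quality handling lives in the framework rather than in custom code.
@dlt.table(comment="Validated events ready for downstream consumption")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")
def clean_events():
    return dlt.read_stream("raw_events").where(col("event_ts").isNotNull())
```

Because `clean_events` reads from `raw_events` via `dlt.read_stream`, DLT derives the execution order itself; there is no separate orchestration code to write or maintain.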

Magnus Johannesson

Solution Architect @ Pro Analytics

Göteborg, Sweden
