Modernizing Your Databricks Engineering: Using Lakeflow & Declarative Pipelines
The data landscape isn’t slowing down. Cloud platforms evolve, AI raises the bar, and modern data stacks demand engineers who can build pipelines that are scalable, cost-efficient, and resilient. This full-day hands-on workshop takes you from fundamentals to production-ready practices using Lakeflow Declarative Pipelines (formerly Delta Live Tables, or DLT) plus Databricks orchestration tools.
You’ll learn how to ingest and transform raw data with Lakeflow, orchestrate and monitor workloads, and apply real-world optimization techniques that save time and money. By the end of the day, you’ll have built your own end-to-end pipeline on Databricks and walked away with frameworks you can apply immediately to your organization’s projects.
What You’ll Learn
- How to design and implement medallion-architecture pipelines using Lakeflow Declarative Pipelines (see the first sketch after this list).
- How to orchestrate, schedule, and monitor your workloads in Databricks (see the second sketch after this list).
- Techniques for data quality, schema enforcement, and governance.
- Optimization patterns for performance and cost savings in the cloud.
- A framework for evaluating new tools and practices in a rapidly changing, AI-driven world.
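To make the pipeline and data-quality bullets concrete, here is a minimal sketch of a bronze-to-silver medallion flow using the classic `dlt` Python module. The dataset names and source path are illustrative placeholders, not material from the session itself:

```python
import dlt
from pyspark.sql import functions as F

# Bronze: incrementally ingest raw JSON files with Auto Loader.
# `spark` is provided by the pipeline runtime; the path is a placeholder.
@dlt.table(comment="Raw orders, ingested as-is")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/demo/raw/orders")
    )

# Silver: declare data-quality expectations; rows that fail are dropped,
# and pass/fail metrics surface in the pipeline event log for monitoring.
@dlt.table(comment="Validated orders")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
@dlt.expect_or_drop("positive_amount", "amount > 0")
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("processed_at", F.current_timestamp())
    )
```

Expectations like these are how a declarative pipeline enforces quality rules: the constraints live next to the table definition instead of in separate validation jobs.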
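For the orchestration bullet, one common pattern is a scheduled Databricks Job that triggers the pipeline refresh. A rough sketch with the Databricks SDK for Python follows; the job name, pipeline ID, and cron expression are placeholders:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()  # picks up credentials from env vars or a config profile

# Create a job with a single task that refreshes the pipeline nightly.
job = w.jobs.create(
    name="orders-pipeline-nightly",  # placeholder name
    tasks=[
        jobs.Task(
            task_key="refresh_orders",
            pipeline_task=jobs.PipelineTask(pipeline_id="<your-pipeline-id>"),
        )
    ],
    schedule=jobs.CronSchedule(
        quartz_cron_expression="0 0 2 * * ?",  # 2:00 AM daily
        timezone_id="UTC",
    ),
)
print(f"Created job {job.job_id}")
```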
Format
Length: Full day (6.5 hours, including breaks)
Skill Level: Intermediate (familiarity with SQL or Spark recommended)
Chris Gambill
Founder | Gambill Data | Fractional Data Strategist and Leader
Knoxville, Tennessee, United States