Speaker

Chris Gambill

Founder | Gambill Data | Fractional Data Strategist and Leader

Knoxville, Tennessee, United States

Chris Gambill is a Data Architect and Founder of Gambill Data, with over 25 years of experience helping organizations modernize their data platforms. He has led migrations from on-premises systems to cloud platforms like Azure and Databricks, designed scalable architectures using the Medallion model, and built pipelines that power analytics and AI adoption.

Chris has worked across industries including manufacturing, aviation, and telecommunications, serving as both a hands-on engineer and an associate director. Today, he focuses on fractional data leadership, helping companies get senior-level strategy without the overhead, and on coaching the next generation of data engineers through his YouTube channel, “The Data Engineering Channel.”

Passionate about bridging the gap between technical teams and business leaders, Chris brings a no-nonsense, ROI-focused approach to data strategy, governance, and engineering.

Area of Expertise

  • Business & Management
  • Information & Communications Technology

Topics

  • Data Engineering
  • Databricks
  • Big Data
  • All things data

Modernizing Your Databricks Engineering: Using Lakeflow & Declarative Pipelines

The data landscape isn’t slowing down. Cloud platforms evolve, AI raises the bar, and modern data stacks demand engineers who can build pipelines that are scalable, cost-efficient, and resilient. This full-day, hands-on workshop takes you from fundamentals to production-ready practices using Lakeflow Declarative Pipelines (formerly Delta Live Tables, or DLT) plus Databricks orchestration tools.

You’ll learn how to ingest and transform raw data with Lakeflow, orchestrate and monitor workloads, and apply real-world optimization techniques that save time and money. By the end of the day, you’ll have built your own end-to-end pipeline on Databricks and walked away with frameworks you can apply immediately to your organization’s projects.

What You’ll Learn

  • How to design and implement Medallion-architecture pipelines using Lakeflow Declarative Pipelines (see the sketch after this list).
  • How to orchestrate, schedule, and monitor your workloads in Databricks.
  • Techniques for data quality, schema enforcement, and governance.
  • Optimization patterns for performance and cost savings in the cloud.
  • A framework for evaluating new tools and practices in a rapidly changing, AI-driven world.
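To make the Medallion flow concrete, here is a minimal sketch of a bronze-to-silver pipeline written against the classic DLT Python API (which Lakeflow Declarative Pipelines still accepts, imported as `dlt`). The table names, landing path, and expectation rules are illustrative assumptions, not workshop materials:

    # Minimal sketch of a bronze-to-silver Medallion pipeline using the
    # classic DLT Python API. Table names, the landing path, and the
    # expectation rules below are illustrative assumptions.
    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Bronze: raw orders ingested incrementally with Auto Loader")
    def orders_bronze():
        # `spark` is provided automatically in a pipeline notebook
        return (
            spark.readStream.format("cloudFiles")        # Auto Loader
            .option("cloudFiles.format", "json")
            .load("/Volumes/main/landing/orders/")       # hypothetical path
        )

    @dlt.table(comment="Silver: validated orders with enforced data quality")
    @dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
    @dlt.expect_or_drop("positive_amount", "amount > 0")
    def orders_silver():
        return (
            dlt.read_stream("orders_bronze")
            .withColumn("processed_at", F.current_timestamp())
        )

The expectation decorators are where the data-quality and schema-enforcement ideas from the list above live: rows that fail a rule are dropped, and the failures are counted in the pipeline’s event log for monitoring.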

Format
Length: Full day (6.5 hours, including breaks)

Skill Level: Intermediate (familiarity with SQL or Spark recommended)

Modern Data Engineering with Lakeflow Declarative Pipelines & Databricks Orchestration

Cloud platforms continue to evolve, AI raises the bar, and modern data stacks demand engineers who can build pipelines that are scalable, cost-efficient, and resilient. The teams that cover all of this and reach the finish line fastest win. This abbreviated hands-on workshop takes you from fundamentals to production-ready practices using Lakeflow Connect to get your data from source to Unity Catalog quickly and cheaply.

You’ll learn how to ingest raw data with Lakeflow, orchestrate and monitor workloads, and apply real-world techniques that save time and money. By the end of the session, you’ll have walked away with frameworks you can apply immediately to your organization’s projects.

What You’ll Learn

  • How to design and implement Lakeflow Connect pipelines.
  • How to orchestrate, schedule, and monitor your workloads in Databricks (see the scheduling sketch after this list).
  • A framework for evaluating new tools and practices in a rapidly changing, AI-driven world.
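As one hedged illustration of the orchestration piece, the sketch below schedules an existing pipeline as a nightly Databricks Job and checks its recent runs using the Databricks SDK for Python. The job name, cron expression, and pipeline ID placeholder are assumptions for the example:

    # Minimal sketch: schedule an existing pipeline as a nightly Databricks
    # Job and poll its recent runs, using the Databricks SDK for Python
    # (pip install databricks-sdk). Job name, cron expression, and the
    # pipeline ID are illustrative assumptions.
    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import jobs

    w = WorkspaceClient()  # reads auth from env vars or ~/.databrickscfg

    created = w.jobs.create(
        name="orders-ingestion-nightly",
        tasks=[
            jobs.Task(
                task_key="run_pipeline",
                pipeline_task=jobs.PipelineTask(pipeline_id="<your-pipeline-id>"),
            )
        ],
        schedule=jobs.CronSchedule(
            quartz_cron_expression="0 0 2 * * ?",  # 2:00 AM daily
            timezone_id="America/New_York",
        ),
    )

    # Basic monitoring: print the state of the five most recent runs.
    for run in w.jobs.list_runs(job_id=created.job_id, limit=5):
        print(run.run_id, run.state.life_cycle_state, run.state.result_state)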

Adapt or Be Automated: Continuous Learning in the Age of AI and Data Engineering

In data engineering, the tools never stop changing. From on-premises ETL to cloud-native pipelines, from dashboards to AI-driven insights, the landscape evolves faster than most teams can keep up. But the data engineers who thrive aren’t the ones who know a single tool inside out; they’re the ones who adapt, learn, and apply new practices as the field transforms.

In this session, we’ll explore why adaptability is the most critical skill in the AI era. You’ll learn how to evaluate new technologies, when to embrace the latest innovations (like generative AI for pipeline automation), and when to stick with proven practices. Real-world stories from 25 years in the field will highlight how continuous learning turned potential failures into successful, future-ready data projects, and has kept me in a career that I love.

Agenda (60 minutes)

The only constant: change in data engineering (5 min).

A short evolution tour (10 min): from DTS → SSIS → ADF → Databricks & Fabric.

How AI changes the stakes (15 min): automation, copilots, and the importance of the "human in the loop".

Frameworks for adaptability (15 min): evaluating trends vs. hype, choosing what to learn.

Habits for continued growth (10 min): sustainable learning routines that fit busy engineers.

Q&A and audience stories (5–10 min).

Key Takeaways

Adaptability, not mastery of a single tool, is the most valuable long-term skill.

How AI raises the bar for learning speed and breadth in data engineering (consider the evolution from Stack Overflow to ChatGPT).

A practical framework for evaluating new tools without getting caught in shiny-object syndrome.

Habits and resources that make ongoing learning realistic and effective.
