Session
Config-Driven Data Engineering in Microsoft Fabric
Many Fabric implementations start with a handful of PySpark notebooks and quickly face the same challenge: how to manage, reuse, and orchestrate them at scale. Without structure, integrations become fragile, inconsistent, and difficult to promote across environments. This session explores a practical architecture that turns ad-hoc notebooks into reusable, auditable, and production-ready assets.
We’ll walk through the principles behind configuration-based orchestration, modular notebook patterns, and dynamic pipeline generation within Fabric. You’ll learn how declarative metadata can drive transformations, handle slowly changing dimensions, manage schema evolution, and enable environment layering for dev/test/prod deployments while still aligning with Fabric’s native Pipelines, Lakehouses, and the Delta Lake format.
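To make the idea concrete, here is a minimal sketch of what a declarative dataset config and the notebook code that interprets it might look like. The table names, config keys, and the `run_dataset` helper are hypothetical illustrations, assuming a Fabric Spark notebook with Delta Lake available; the framework shown in the session may differ.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # in Fabric notebooks, `spark` is already provided

# Hypothetical dataset config -- in practice this might live in a config table,
# a Lakehouse file, or a JSON artifact promoted alongside the workspace.
dataset_config = {
    "source_table": "bronze.customers",   # assumed table names
    "target_table": "silver.dim_customer",
    "business_keys": ["customer_id"],
    "scd_type": 1,                        # type 1: overwrite in place
}

def run_dataset(cfg: dict) -> None:
    """Interpret one config entry and apply it as a Delta merge."""
    source_df = spark.read.table(cfg["source_table"])

    if not spark.catalog.tableExists(cfg["target_table"]):
        # First load: create the target directly from the source.
        source_df.write.format("delta").saveAsTable(cfg["target_table"])
        return

    # Build the merge condition from the declared business keys.
    condition = " AND ".join(f"t.{k} = s.{k}" for k in cfg["business_keys"])

    target = DeltaTable.forName(spark, cfg["target_table"])
    (
        target.alias("t")
        .merge(source_df.alias("s"), condition)
        .whenMatchedUpdateAll()      # type 1 SCD: latest values win
        .whenNotMatchedInsertAll()
        .execute()
    )

run_dataset(dataset_config)
```

Onboarding a new dataset in this style means adding a config entry rather than writing a new notebook, which is the essence of the metadata-driven approach described above.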
The session demonstrates how these principles translate into cleaner operations: faster onboarding for new datasets, simpler maintenance, and transparent governance. By standardizing how notebooks interpret configuration, teams can deliver new integrations in days instead of weeks while preserving full auditability and control.
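As one illustration of standardized config interpretation, a notebook might layer an environment-specific overlay over a base config so the same code runs unchanged in dev, test, and prod. The file layout, the `load_layered_config` helper, and the dataset name below are assumptions for illustration only.

```python
import json
from pathlib import Path

def load_layered_config(dataset: str, environment: str) -> dict:
    """Merge a base config with an environment-specific overlay.

    Paths and keys here are illustrative; a real framework might read them
    from a Lakehouse Files area or a config table instead.
    """
    base = json.loads(Path(f"config/{dataset}.json").read_text())
    overlay_path = Path(f"config/{environment}/{dataset}.json")
    overlay = json.loads(overlay_path.read_text()) if overlay_path.exists() else {}
    return {**base, **overlay}  # environment values win over base values

# The same notebook code, parameterized only by environment.
cfg = load_layered_config("dim_customer", environment="dev")
```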
Attendees will leave with actionable patterns they can apply immediately to modernize their own Fabric environments and establish the foundation for a scalable, metadata-driven integration framework.
Pierre LaFromboise
Covenant Technology Partners - Chief Data & Analytics Officer
St. Louis, Missouri, United States