ETL Approaches When Using Fabric PySpark
Designing ETL architectures involves choosing among code-first, low-code, and no-code options for data processing. So what can ETL processes look like in Fabric when you focus on PySpark?
In this session, I'll showcase what ETL can look like in PySpark, highlight libraries that can assist with the ETL process, and demonstrate how Fabric performs with different approaches to dataframe and Delta table usage. By the end of this session, you should understand how PySpark can support your ETL work, whether you use it exclusively or alongside other Fabric features.
Jared Kuehn
Data Platform MVP by day, Storyteller by night
Appleton, Wisconsin, United States