Speaker

Sunil Sabat

Microsoft, Principal Program Manager, Azure Data

San Francisco, California, United States

https://www.linkedin.com/in/sunilsabat/
Sunil has worked in various roles in the hardware and software industry, with expertise in a broad spectrum of technologies, including all major cloud platforms, enterprise solutions, and open source. He is currently a Principal Program Manager in the Microsoft Azure Data group, focused on customer success and product adoption. Sunil holds advanced degrees in Computer Engineering and Computer Science, as well as an MBA in Marketing and Finance. He has earned certifications from Microsoft, AWS, Google, Nvidia, Salesforce, IBM, and CompTIA, and holds a Software Security Certificate from Stanford University. Sunil has presented at conferences such as IEEE and CompEuro, co-authored an IBM Software Redbook, and spoken at IBM IOD and Intel IPDT events. He also participated in the DeveloperWeek 2024 Hackathon on Generative AI and has advised courses in Product Innovation and Business Analytics at the Leavey School of Business. Sunil passed the DP-600 exam in 2024 and recently passed the DP-700 exam as well.

Area of Expertise

  • Business & Management
  • Finance & Banking
  • Health & Medical
  • Information & Communications Technology
  • Manufacturing & Industrial Materials

Topics

  • Data Engineering
  • ETL
  • ELT
  • Fabric
  • Data Factory
  • Spark
  • Data Warehousing
  • Fabric Real-Time Intelligence

Let us build AI and BI products with Fabric

Did you know that Fabric now ships with workloads and features that enable you to develop AI and BI solutions securely and rapidly? In this tutorial, we will build a practical BI and AI product in a day.

In the first session, starting from real business analytics requirements, we will build a Power BI semantic model from scratch using the data pipeline, data engineering, and data warehouse workloads.

In the second session, we will define an AI product, starting with agentic AI requirements for customer service. We will decide on the LLM, the training data, and the AI Foundry APIs for inference, with prompt output aided by retrieval-augmented generation (RAG) seeding. For this session, we will use Azure AI services and orchestrate Fabric data pipelines with notebook, copy, and web activities to achieve our goal.
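
As a rough illustration of the inference step, here is a minimal sketch, assuming an Azure OpenAI deployment reachable from a Fabric notebook; the endpoint variables, deployment name, and retrieve_context() helper are hypothetical stand-ins for the RAG seeding built in the session.

import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

def retrieve_context(question: str) -> str:
    # Placeholder for the RAG retrieval step (e.g., a vector search over seeded documents).
    return "Order #123 shipped on 2024-05-01 and is currently in transit."

question = "Where is my order #123?"
context = retrieve_context(question)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical deployment name
    messages=[
        {"role": "system", "content": "You are a customer service agent. Answer only from the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)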

By the end of the tutorial, you will know how Fabric brings all the tools, APIs, and repositories together for building AI and BI solutions.

Microsoft Fabric Design Patterns: Choosing the Right Approach

In this session, you'll discover a variety of design patterns to build robust data architectures using Microsoft Fabric. We’ll explore best practices for implementing Bronze, Silver, and Gold layers, designing an effective semantic model, and determining the optimal number of workspaces. Additionally, we’ll discuss storage options—when to choose lakehouses, data warehouses, or SQL databases—and delve into integration techniques such as data pipelines, Dataflow Gen2, and copy jobs. Join us to gain insights into selecting the right strategy for your data needs!
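
To make the medallion idea concrete, here is a minimal sketch, assuming a Fabric notebook with a Spark session and hypothetical table names, of promoting data from a Bronze table to a Silver table: deduplicate, fix types, and filter out bad rows before the Gold layer and semantic model are built on top.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# In a Fabric notebook a Spark session is already provided; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# Bronze: raw, as-ingested data (hypothetical table name).
bronze = spark.read.table("bronze_orders")

# Silver: cleansed and conformed data.
silver = (
    bronze.dropDuplicates(["order_id"])
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())
)

silver.write.mode("overwrite").format("delta").saveAsTable("silver_orders")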

Best practices and design patterns for Fabric Data Factory

In this session, we will share learnings from working with customers and partners, covering best practices and design patterns for implementing end-to-end data integration solutions. You will learn tips, tricks, and gotchas for designing data ingestion, data transformation, replication, and orchestration workloads with Fabric Data Factory.

Learn how to be successful with Data Factory pipelines, Dataflow Gen2, copy jobs, mirroring, orchestration, and more. We will also cover the most common dataflow failures and query-folding mistakes that slow dataflow performance, and how to correct them. Join us as we share learnings, best practices, and design patterns that will set you up for success with Fabric Data Factory.

Real SMB success journey with Fabric Unified Analytics Platform

Session objective: The session shows how SMB customers can use Fabric workloads in a painless, results-oriented manner to achieve their data analytics and visualization goals.

Session content: The session covers common patterns and practices, tips and techniques, and workloads ranging from Data Factory to Data Engineering and Power BI, drawn from recent success stories in the field. With a unified analytics platform like Fabric, SMB customers can implement quickly and realize rapid success. For example, we will cover how a Data Factory pipeline acts as a key enabler, organizing the end-to-end data flow and providing the data management and peace of mind needed to reach the goal.

Session outcome: The session helps SMB customers land their data in Fabric and build workflows painlessly on their first attempt.

Mastering Metadata-Driven Lakehouse Architecture with Data Factory: Real-World Insights

Implementing a lakehouse architecture often presents key challenges, such as managing the complexity of data ingestion from diverse sources, maintaining operational excellence, ensuring robust governance and security controls, and enabling monitoring and auditing across all medallion layers.

With Microsoft Fabric’s unified analytics SaaS solution, these challenges can be significantly mitigated. Fabric accelerates and scales lakehouse implementation using a metadata-driven framework that seamlessly integrates engineering, analytics, and data science functions. This approach promotes interoperability, automation, security, and governance to deliver reliable, high-quality outcomes.

In this session, we’ll explore a metadata-driven methodology for lakehouse implementation, drawing from real-world experiences and field-tested best practices. The approach leverages a modular, reusable, and standardized framework for:
- Metadata-driven data ingestion
- Alert notifications
- Metrics reporting
- Data validation and profiling
- PII data anonymization and governance
- End-to-end auditing across all layers for reprocessing
- Security patterns implementation

This framework not only addresses the complexities of data management but also delivers a cost-effective, efficient solution for businesses, enhancing the productivity of both data engineers and analysts.
Join us for this deep dive to learn how to overcome common lakehouse challenges while unlocking scalable and consistent results using Microsoft Fabric.
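
As one hedged illustration of the metadata-driven ingestion piece above, the sketch below (hypothetical metadata, paths, and table names, run from a Fabric notebook) shows the core idea: a small control list describes each source, and a single generic loop lands every source into the Bronze layer instead of hand-coding a pipeline per table.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# In practice this control metadata would live in a Lakehouse or Warehouse table.
ingestion_metadata = [
    {"source_path": "Files/landing/customers.csv", "target_table": "bronze_customers", "format": "csv"},
    {"source_path": "Files/landing/orders.parquet", "target_table": "bronze_orders", "format": "parquet"},
]

for item in ingestion_metadata:
    reader = spark.read.format(item["format"])
    if item["format"] == "csv":
        reader = reader.option("header", "true")
    df = reader.load(item["source_path"])
    df.write.mode("append").format("delta").saveAsTable(item["target_table"])
    print(f"Ingested {item['source_path']} into {item['target_table']}")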

Fabric Airflow with dbt: Ramp up your ETL, ELT, and data quality tasks with speed and ease

Fabric Airflow and dbt help you deliver reliable data through familiar interfaces (Python, SQL, and the CLI) in a collaborative, ecosystem-friendly manner. Airflow orchestrates jobs that extract data, load it into a warehouse, and run AI and machine-learning processes. dbt handles SQL transformation of the data once it has landed in the warehouse.

Let’s learn how Fabric Airflow can orchestrate dbt jobs: you use Airflow to schedule and manage dbt projects, ensuring that your data transformation models run in the correct order and with the appropriate dependencies. On top of that, your data quality checks run as part of the DAG (directed acyclic graph) code.
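
Here is a minimal sketch of that pattern, assuming an Apache Airflow environment (such as a Fabric Apache Airflow job) with dbt installed and a dbt project at the hypothetical path below; dbt run executes the models and dbt test runs the data quality checks inside the same DAG.

from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/airflow/dags/my_dbt_project"  # hypothetical dbt project location

with DAG(
    dag_id="dbt_transform_and_test",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Run the dbt models (SQL transformations) in dependency order.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run",
    )

    # Run dbt tests so data quality checks are part of the same DAG.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test",
    )

    dbt_run >> dbt_test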

In this session, we will cover the Fabric Airflow and dbt architecture and explain how it differs from open-source and other offerings. With a Fabric focus, you will see how Airflow DAG operators and dbt models accelerate your ETL and ELT tasks, along with data quality checks, through a streamlined SaaS experience.
