Ginger Grant
Principal and Founder of Desert Isle Group
Phoenix, Arizona, United States
Ginger Grant shines as a Data Platform MVP, offering top-notch consulting in the realms of advanced analytics, machine learning, data warehousing, and the intricacies of Microsoft Fabric. Not just a consultant, Ginger is a prolific writer, sharing her insights in articles as a regular columnist for Pure AI, in books, and on her blog, DesertIsleSQL.com. Her mastery doesn't stop there; as a Microsoft Certified Trainer (MCT), she imparts wisdom in various data platform areas, including Microsoft Fabric, Azure Synapse Analytics, Python, and Azure Machine Learning. Ginger's blend of expertise and educational contributions makes her a beacon in the data community.
How do I figure out if I should use a Lakehouse or a Data Warehouse?
Microsoft Fabric offers two distinct yet very similar data structures: the Lakehouse and the Warehouse. Both share operational costs, SQL Endpoint access, parquet file storage, data masking, T-SQL querying, and other features typically associated with SQL Server, which raises the question: why choose one over the other? This session explores the architectural considerations that guide the optimal selection for your organization. We will also review the underlying structure to show why performance is so much faster than Synapse Serverless.
This session will be crucial for those tasked with making informed data architecture decisions for their organizations, as it will equip you to answer the Lakehouse-versus-Warehouse question for your own environment.
Developing with Spark for Microsoft Fabric with Copilot
Developing solutions in Microsoft Fabric can include Apache Spark, and in this environment that includes Copilot-generated code to document, comment, debug, and create error handling. We will show you how to incorporate Copilot to perform these tasks either within Fabric or in a tool you may be more comfortable using, VS Code.
We will also talk about the unique methods that Fabric includes for improving developer productivity by utilizing drag-and-drop methods for creating common code patterns. We will also demonstrate how to use Data Wrangler to analyze and clean your data and add the generated Spark DataFrame modifications to your code.
We will review how to use this code in a data pipeline and monitor results. If you want to improve your Spark productivity in Fabric, you won’t want to miss this session.
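To make the productivity gains concrete, here is a minimal sketch, assuming a Fabric notebook with an attached lakehouse, of the kind of documented, error-handled PySpark code the session describes generating with Copilot; the table and column names are hypothetical, and this is illustrative rather than actual Copilot output.

```python
# Illustrative sketch only -- not actual Copilot output.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # already running in a Fabric notebook

def load_and_summarize(table_name: str):
    """Read a lakehouse table and return row counts per day, with basic error handling."""
    try:
        df = spark.read.table(table_name)
        return (df.withColumn("order_date", F.to_date("order_date"))
                  .groupBy("order_date")
                  .count())
    except Exception as exc:
        # Surface a readable message instead of a raw stack trace
        print(f"Failed to process table '{table_name}': {exc}")
        raise

daily_counts = load_and_summarize("sales_orders")  # hypothetical table name
daily_counts.show(5)
```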
Data Wrangling: Code Generation in Fabric without Copilot
If you were hoping to get some assistance with writing Spark code and do not have an F64 SKU, this session is for you. We will review how to examine and modify your lakehouse data using Python, even if you don't know how to write it, by using Data Wrangler. You won't want to miss this session to help you modify your data and include it as part of your ETL pipeline.
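For a flavor of what Data Wrangler produces, here is a hedged sketch in the style of its generated output: the tool emits its cleaning steps as a reusable function. The column names and file path are hypothetical, not actual tool output.

```python
import pandas as pd

# Data Wrangler emits its steps as a reusable function roughly like this
# (illustrative sketch; columns and path are hypothetical, not tool output).
def clean_data(df: pd.DataFrame) -> pd.DataFrame:
    # Drop rows missing a customer ID
    df = df.dropna(subset=["customer_id"])
    # Trim whitespace and normalize casing in the city column
    df["city"] = df["city"].str.strip().str.title()
    # Remove exact duplicate rows
    return df.drop_duplicates()

df = pd.read_parquet("/lakehouse/default/Files/customers.parquet")  # hypothetical path
df_clean = clean_data(df)
```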
AI in Power BI
AI is changing the game in a big way, making things quicker and boosting the quality of results across various applications, and Power BI is no exception. In this session, we're diving into how Copilot can lend us a hand in writing DAX, designing visuals, and unlocking other cool features that make reports more accessible and cut down on the time it takes to whip up awesome reporting solutions in Power BI. We're also going to explore how Copilot shines in the semantic model and the Fabric service, helping you churn out solutions faster that are a breeze for others to use and understand. You will learn how developing complex elements can be simpler and friendlier than ever before.
Managing Power BI Development with CI/CD
This year has brought a number of changes to Power BI which provide new ways to create a continuous integration and continuous delivery (CI/CD) environment. During the session, new tools and features are reviewed to help you create more effective DevOps management in Power BI, with examples of how to incorporate source control. Organizing and managing workspaces is also an important part of the release lifecycle, which we will demonstrate, as this can impact how releases are managed via pipelines. To better understand the options available within your organization, all of the licensing requirements for source control will be reviewed so you understand which features are available in different versions.
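As one illustration of the automation the session discusses, here is a hedged Python sketch that promotes content between deployment pipeline stages using the Power BI REST API's Deploy All operation; the token and pipeline ID are placeholders, and production code would obtain the token with MSAL or a service principal.

```python
import requests

# Hedged sketch: promote Development -> Test with the Power BI REST API
# (Pipelines - Deploy All). Token and pipeline ID are placeholders.
token = "<bearer-token>"          # obtain via MSAL / service principal
pipeline_id = "<pipeline-guid>"

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/pipelines/{pipeline_id}/deployAll",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "sourceStageOrder": 0,  # 0 = Development stage
        "options": {"allowCreateArtifact": True, "allowOverwriteArtifact": True},
    },
)
resp.raise_for_status()
print("Deployment request accepted:", resp.status_code)
```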
Lakehouse or Warehouse: Which is the right choice in Microsoft Fabric?
Microsoft Fabric introduced the Lakehouse and the Warehouse as two different data structures, but how different are they really? Both cost the same amount to run, both allow access via a SQL Endpoint, use parquet files, and include data masking, T-SQL querying, and other features commonly associated with SQL Server. Why would you use one over the other? In this session we will look at the architecture decisions which determine the best choice for your organization. We will also demonstrate how you can convert a dedicated pool from an Azure Synapse data warehouse into a lakehouse. If you want to consolidate your data from a Synapse solution to Microsoft Fabric, you will find this session very helpful. We will provide reasons why you might want to consolidate your data and when this architecture might not be the best choice for your environment. If you are looking to design a data architecture for your organization, this session will help you make the right decisions going forward.
Implementing Self-Service reporting in Fabric
Creating an environment where users are able to create reports without having to worry about data modeling, creating security, or developing complex measures is a goal for many organizations. Setting up an environment which enables people to create reports while following the design standards of the organization will make for a more impactful Fabric implementation.
In this session we will review advanced theme development, structure and organization of workspaces, and other governance practices that will provide the required support needed for users to better be able to create and distribute reports.
Implementing a Self-Service Reporting Solution
Providing an environment where business users are able to develop their own reports is a goal of many companies. Providing this environment in Power BI takes more than just pointing users to a data model: users need design standards and an understanding of which measures to use so that they can do their own analysis. We will look at all of the steps needed to create detailed themes, templates, and supportive model designs. We will also review what is required to use new tools such as Copilot for Power BI, Microsoft's implementation of ChatGPT, and how it can be incorporated to ensure people have the ability to generate the reports they want even if they are not Power BI fluent. Learn everything you need to include to make non-Power BI experts look good and create meaningful analysis.
Fabric for Business Data 101
In this full-day workshop, participants will dive into the foundational concepts of Microsoft Fabric—a powerful, unified analytics platform designed to transform how organizations handle their data. Led by industry experts Heidi Hasting and Ginger Grant, this workshop will equip attendees with essential knowledge and practical skills to harness the capabilities of Fabric for business data management, analytics, and decision-making.
Agenda
• Introduction to Fabric
• Introduction to Microsoft Fabric and Dataverse Integration
• Data Modeling and Dataverse Tables
• Power BI for Data Visualization
• Automating Processes with Data Activator
• Security and Compliance in Fabric
Administration and Licensing in Microsoft Fabric
Microsoft Fabric administration is similar to Power BI administration, but there are new elements as well. It is important to understand the settings in order to maintain your Fabric environment, and here you will find out what your options are. In this session we will review the preferences, item settings, tenant settings, audit logs, domains, and capacity settings. While we are reviewing the capacity settings, we will review the licensing necessary to access all of the features in Fabric and how you can add capacity with both F and P SKUs. As so much Fabric administration is workspace based, we will review workspace settings and how you can get a global view using the REST API. You won't want to miss this in-depth session to better understand how to manage and administer Microsoft Fabric.
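For example, that global view of workspaces can be pulled programmatically. Below is a hedged sketch using the Fabric REST API's admin workspaces endpoint; token acquisition is omitted and an admin-scoped bearer token is assumed.

```python
import requests

# Hedged sketch: list every workspace in the tenant with the Fabric REST API
# (Admin - List Workspaces). An admin-scoped bearer token is assumed.
token = "<admin-bearer-token>"  # placeholder

resp = requests.get(
    "https://api.fabric.microsoft.com/v1/admin/workspaces",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
for ws in resp.json().get("workspaces", []):
    print(ws.get("id"), ws.get("name"), ws.get("capacityId"))
```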
Copiloting Fabric
In this session we will look at a number of different things that you can have Copilot do for you with Microsoft Fabric, including one that does not require an F64 Fabric license! We will review how Copilot can help you document and improve your code, from DAX to Spark. You will see what kinds of reports and ELT solutions Copilot can create and how using it can improve productivity.
Creating an Organizational Data Store with Microsoft Fabric
If you have been wondering what Microsoft Fabric is and how it can be used to organize and report on data, you will want to attend this session. Microsoft introduced Fabric as a new product in 2023, which incorporated a number of features from Power BI, Synapse, Azure Data Factory, Purview, and Azure ML, and added new features as well. In this session I will describe what business needs Fabric was designed to solve and how to incorporate these changes into your environment. If you are currently using Power BI and wonder how these changes will impact your business, you will want to attend, as you will learn how the changes affect Power BI. In the accompanying demonstrations you will see how you can use Microsoft Fabric to organize your data in OneLake, develop virtual databases, and connect the data to create reports in Power BI. We will also review the incorporation of ChatGPT technology, which Microsoft calls Copilot, to generate reports for you, which you can release right after you create them or use as a starting point to modify or create new Power BI reports.
Selecting the best data transformation strategy
While the technologies used for transforming data have changed, the goals have not. Companies need data from multiple sources combined to provide the single version of the truth. What tools can be used to accomplish this? Microsoft Fabric contains a number of different options: Azure Data Factory/Azure Synapse integration, Spark, and Power BI dataflows can all be used to transform and model data. Which one should you use? This session will examine the different elements in your environment which can be used to determine which solution would be the best fit based upon factors such as data types, maintenance, cost, and skill levels.
Implementing Data Science in Microsoft Fabric
Developing data science solutions in Microsoft Fabric uses a number of components to create machine learning models and easily incorporate data from OneLake into dataframes. This session will also review some time-saving tools added to Fabric, including Data Wrangler and the Fabric Runtime. We will take a look at how you can incorporate these tools with notebooks to create a machine learning model and implement it in Microsoft Fabric data pipelines. In this session we will walk through the data process, starting with data exploration, then developing a dataset, experimenting with and evaluating algorithms, modeling, and implementing the solution.
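As a preview of the notebook workflow, here is a minimal sketch of the experiment-tracking pattern: Fabric notebooks ship with MLflow, so runs logged this way surface as Experiment items in the workspace. The experiment name is hypothetical, and a built-in scikit-learn dataset stands in for your own data.

```python
import mlflow
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # stand-in dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-classifier")  # hypothetical experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")  # model is logged under the run
```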
Direct Lake and Direct Query In Fabric
Direct Query has been the choice when businesses wanted large data models where data entered in a database was immediately available. Direct Lake provides a new method for achieving the goals previously only attainable with Direct Query. Direct Lake was introduced to Power BI this year, and it can be used to analyze large data volumes without needing to import the data, while providing good performance at scale to users who want to access the data. In this session we will see how you can use large data volumes in Power BI and take a look at how well Direct Lake performs and scales. We will examine how OneLake, lakehouses, and SQL Endpoints can be used together to provide optimal performance with Direct Lake. To see if this solution will work in your data environment, you will gain an understanding of the licensing required to implement it so that you can ensure you have the knowledge you need.
Implementing Source control in Fabric
Creating a continuous integration and continuous delivery (CI/CD) process in Power BI is now possible with PBIP files. In this session we will review how to implement it. We will walk through examples to provide attendees the information they need to add Power BI to the CI/CD process used for other coding environments. See how you can automate assigning workspaces, adding users, managing deployments, and creating and editing pipelines.
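As a small taste of that automation, the sketch below adds a user to a workspace through the Power BI REST API; the IDs, token, and email address are placeholders.

```python
import requests

# Hedged sketch: add a user to a workspace via the Power BI REST API
# (Groups - Add Group User). Token, workspace ID, and email are placeholders.
token = "<bearer-token>"
workspace_id = "<workspace-guid>"

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}/users",
    headers={"Authorization": f"Bearer {token}"},
    json={"emailAddress": "analyst@contoso.com",
          "groupUserAccessRight": "Contributor"},
)
resp.raise_for_status()
```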
Data Ingestion in Fabric
In this session we will review three different methods for ingesting data: using the COPY command, using Fabric's Dataflow Gen2, and using traditional dataflows, and discuss the reasons for using each method. The COPY command has been updated, and Dataflow Gen2 is part of Fabric. We will review how these compare to traditional Synapse ingestion dataflows and when it makes sense to use each.
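For comparison with the methods above, here is one more ingestion pattern, sketched in Spark and assuming a Fabric notebook with a default lakehouse: landing raw CSV files into a Delta table. The paths and table name are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # preconfigured in a Fabric notebook

# Read raw landing files from the default lakehouse (hypothetical path)
raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("Files/landing/orders/*.csv"))

# Append into a managed Delta table in the lakehouse (hypothetical name)
raw.write.format("delta").mode("append").saveAsTable("orders_raw")
```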
Building a Fabric Environment for Power BI
Microsoft Fabric contains a number of different tools which you can use to create a data lakehouse environment perfect for Power BI. Learn the steps involved to develop and monitor data transformation pipelines, create a data lakehouse environment, and produce a final data store. Depending on the different data elements within the organization, the final data store could be a lakehouse, SQL Endpoint, data warehouse, or Power BI model. We will review the different use cases so that users will understand which combination is best given the properties of the data and the skillsets in your environment.
Incorporating Fabric One Lake in your data environment
With the introduction of Fabric OneLake into your data environment, there is a lot of confusion regarding how it works. How do you make it work with GDPR if you need to ensure that data is not moved to another country? How do you implement the organizational structure when workspaces create folders within the lake? What kind of security can you employ with sensitive data to ensure that access is limited with OneLake? What kind of management needs to occur for the Parquet files which are automatically created? How do you use Power BI with OneLake? What is Direct Lake? This session will provide answers to those questions and show you how Fabric OneLake works with current and new resources.
Implementing Data Analytics with Microsoft Fabric
Microsoft Fabric has incorporated a number of different elements into one environment: data lakes with OneLake, data warehousing, machine learning, data transformation, and reporting. In this session we will look at using OneLake for data lakehouses, demonstrating how OneLake can be used instead of a database and highlighting the performance improvements which have been made. Demonstrations will show the ability to use Fabric to connect to data from different sources and from different data lakes to ensure compliance with GDPR or other location-based regulations with OneLake. We will examine what a Fabric data lakehouse entails and how the data is integrated into Power BI for reporting. Changes in Spark clusters and Delta file implementation are examined to ensure you understand how these improvements will impact your data movement pipelines and machine learning tasks. Data science workflows are reviewed to provide a good explanation of not only how to use them, but also the best practices for integrating these objects into pipelines. We will investigate how the new Dataflow Gen2 data transformations can speed up development and when they would be a good implementation choice. We will also examine the different ways Copilot is incorporated within Fabric so that your organization can be on the cutting edge of AI development with ChatGPT technology. This session will show you how to leverage Fabric's different components for data-driven decision making within your organization.
Data Science in Fabric
In this session we are going to look at creating a machine learning model and optimizing algorithm selection with Microsoft Fabric. After reviewing how that works, attendees will learn how they can incorporate a low-code solution to pick the best algorithm for the task. We will examine the impact of Fabric Spark pools for those who may be familiar with Synapse or Databricks and how these changes improve the Spark development process. We will take a look at everything from creation to implementation, incorporating new Fabric functionality.
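As an example of the low-code algorithm selection mentioned above, here is a hedged sketch using FLAML, the AutoML library included in Fabric's data science runtime; a built-in scikit-learn dataset stands in for real training data.

```python
from flaml import AutoML
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset

automl = AutoML()
# Search estimators and hyperparameters for 60 seconds
automl.fit(X, y, task="classification", time_budget=60)
print("Best estimator:", automl.best_estimator)
print("Best validation loss:", automl.best_loss)
```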
Architecting a data solution in Fabric
With the introduction of Microsoft Fabric, the methods used for creating a data lake and using it in Power BI have changed. In this session we will take a look at how you can use the different components of Fabric to architect a solution using the features of OneLake with a Fabric workload. The session focuses on how Fabric development differs from the methods you may have used to create a solution with Synapse and Power BI.
Power BI Effective Management and Deployment Strategies Revealed
Project Management features in Power BI have changed quite a bit, and the new changes provide better opportunities for creating, managing and deploying solutions within your organization. Learn how to take advantage of the features in Power BI to improve collaboration, decrease model proliferation, and provide for a more robust set of deployment techniques.
Performance tuning Power BI reports
There are many reasons your Power BI reports may be running slowly. You may have a lot of data or inefficient DAX Calculations or a sub-optimal data model. Learn how to analyze the underlying issues with your reports and determine which solution you can use to improve the speed and scalability of your reports. We will examine solutions for import, direct query and live connections. No matter how you access your data, this session will provide you with some solutions that you can use on the reports in your environment.
Improving Power BI Performance with Data Modeling
Data modeling in Power BI is key to a successful implementation. In this session we will explore different techniques for improving the speed of Power BI by examining solutions to different common performance issues and look at how they can be improved with different modeling techniques. Attendees will learn how they can improve Power BI’s performance through the use of different table types, directional filtering and calculation groups. We will explore how different types of composite modeling can be used to improve the data environment and how these techniques can improve your Power BI apps.
How to pick the best Data transformation strategy
While the technologies used for transforming data have changed, the goals have not. Companies need data from multiple sources combined to provide the single version of the truth. What tools can be used to accomplish this? Azure Data Factory/Azure Synapse integration, SQL, Spark, and Power BI dataflows can all be used to transform and model data. Which one should you use? This session will examine the different elements in your environment which can be used to determine which solution would be the best fit based upon factors such as data types, maintenance, cost, and skill levels.
Developing a Self-Service Power BI report development environment
Power BI is designed to be a reporting tool for the masses, as report visualizations can be created with a few clicks. While it looks easy enough, there are several elements, including the data model and DAX measures, which need to be created to make easy report creation possible. In this session we are going to explore the practices and methods which make it possible for non-data professionals to develop their own reports in Power BI using models and templates created for that purpose. Attendees will learn what needs to be built to create a Power BI environment designed for end-user report creation. Learning which elements need to be included in the data model and what needs to be included in the report templates will provide a foundation for self-service development.
Data Model Design and Optimization for Fabric
If you have designed a semantic model for a data warehouse, you know most of what you need to develop a semantic model for Power BI, but there are some important differences to improve the overall performance and maintenance within Fabric. In this session we will review the differences and include design patterns for row-level security, data aggregations, and DAX development. We will also review different design patterns for composite models, Direct Query, Import, and Dual modes and how to organize them for optimal Fabric environments.
Moving away from One Model to rule them all
Often, a single data warehouse is meant to be the one solution to provide data to an organization, a method some may describe as the Lord of the Rings model: one model to rule them all. There are a number of reasons why this approach is not suited to most organizations. Some of the reasons are technical; others include the inability of one model to meet the needs of different groups of users or data types. In this session we will explore how to build a data environment which provides flexibility to different groups of users, with examples of how to implement different types of technology to provide a broader set of solutions using Azure Synapse, data lakes, and Power BI.
Feedback Link - https://sqlb.it/?7013
Incorporating Data Lakes into Power BI
More and more companies are using data lakes as a central storage location for all things data. And because once you have data, people want to generate reports with it, this session will explore a number of different ways to integrate Azure Data Lake Gen 2 storage into Power BI. We will review three different methods for integrating data from Azure Data Lake Gen 2 into Power BI: incorporating data with dataflows, using the Common Data Model, and loading data directly into a Power BI model. In addition to seeing how to do this in the demos during this session, attendees will learn which method is best suited for their environment.
Power BI DAX in a Day
If you have started working with DAX and would like to take your code to the next level, this session is for you. In this all-day session, you will learn not only how to better write and understand DAX, but also how to use tools which can assist in writing DAX faster and creating better-performing measures. This session will teach you not only how but why to use different DAX elements so you can quickly develop complex measures.
The course will cover the following topics:
• Evaluation Contexts
• Improving DAX with Third-Party Tools and Best Practices
• Variable Uses and tricks
• Iterators
• Deep Dive into CALCULATE
• Advanced Time Intelligence
• Improving DAX Performance
Demonstrations and follow along exercises will provide the opportunity for you to not only hear about the concepts but see how they are implemented in code you can refer to later.
Data Lake Management with Azure Synapse and Delta Lake
Data lake management is required to ensure that the information stored can be readily analyzed. Spark Delta Lake moves the structure of a data lake closer to a database and is definitely something you are going to want to implement in Azure Synapse Analytics. In this session you will learn how to apply Spark Delta Lake to improve data quality and query speed and to review backups of files stored in the data lake. Implementing these strategies can improve analysis capabilities, as data can be analyzed more like a data warehouse, without all of the transformation and storage costs. Using Delta Lake, attendees will see how to ensure the format of data is known when it is added, rather than finding out years later that no one is able to determine what is in the files. As data lakes can be queried like a database, we will examine how to speed analysis with Delta Lake by indexing flat files and consolidating the data to improve query performance. Need to look at what the data looked like prior to a change being made in the lake? We will look at ways to travel back in time to review the data. This session will provide the skills needed to improve your data lake management with Delta Lake, making it even easier to analyze data in Azure Synapse.
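To illustrate the time-travel capability mentioned above, here is a minimal PySpark sketch, assuming a Synapse or Fabric Spark session with Delta Lake available; the table path is hypothetical.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # preconfigured in Synapse/Fabric

path = "abfss://lake@account.dfs.core.windows.net/tables/sales"  # hypothetical

# Time travel: read the table as it looked at an earlier version
df_v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)

# Review the transaction history that makes time travel possible
(DeltaTable.forPath(spark, path)
    .history()
    .select("version", "timestamp", "operation")
    .show())
```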
Diving into DAX: 7 things that will help you write better DAX and understand why
If you use Power BI, you have written DAX. DAX can be complicated, and understanding how the DAX engine works can help you write better-performing DAX. There are a couple of third-party tools which can help, so we will review Bravo, a relatively new free tool you may not know about, and DAX Studio. We will use those tools as we review DAX formatting with Bravo, context transition, different filtering methods and why you would use them, calculated tables, USERELATIONSHIP() tricks, when to use VALUES, and validating performance with DAX Studio. After this session you will be able to incorporate some useful tools, write better-performing DAX, measure the performance, and better understand the DAX engine in Power BI.
Tools and methods for Machine Learning
In this session we will review a number of different ways to solve problems with machine learning, to better understand what is required to problem-solve with it. As part of this discussion we will focus on the data requirements for machine learning, to better understand the kinds of data needed for analysis.
We will also review several different tools for creating machine learning solutions in Azure, including Azure ML and Synapse.
No Machine Learning experience required
Implementing a Self-Service Power BI Solution
Providing an environment where business users are able to develop their own reports is a goal of many companies. Providing this environment in Power BI takes more than just pointing users to a data model: users need design standards and an understanding of which measures to use so that they can do their own analysis. We will look at all of the steps needed to create detailed themes, templates, and supportive model designs. Learn everything you need to include to make non-Power BI experts look good and create meaningful analysis.
Power BI Data Governance
Developing a Power BI data governance environment combines technical elements with managerial skills. To effectively manage a Power BI environment, you will need to develop and incorporate strategies for licensing, data security, source control, report distribution, consistent development standards, and model management. In this session we will review the technical elements you need to implement and the governance practices needed to ensure you have a robust, scalable Power BI environment.
Data Lake or Data Warehouse? Which one makes sense?
In this session we will explore data lakes and how you can use them as a data warehouse. We will also explore creating a traditional Data warehouse using Dedicated Pools in Azure Synapse and review which one makes sense given different environments.
Data storage and Usage in Microsoft Fabric
Microsoft Fabric has different storage and data exploration options which were not available in Synapse or Power BI. OneLake, the data lakehouse, and SQL Endpoints are three different ways of organizing data that may provide a significant benefit to your environment. In this session we will explore these different storage options and the use cases for each. We will also review a new method for exploring data stored in OneLake: Data Wrangler. The demos will provide examples, and since Microsoft Fabric is still in preview, you will be able to work through them later yourself.
Data Engineering in Microsoft Fabric
Microsoft Fabric includes a lot of different elements, including Data Engineering. Data Engineering includes notebooks, lakehouses, and data pipelines, and you will gain a better understanding of how they work together within Fabric. In the demos for this session you will see what functionality these elements provide and how you can use them in your data solutions.
Introduction to Microsoft Fabric
Microsoft Fabric was introduced in May of 2023 and contains elements of Power BI, Synapse, and machine learning. In this session we will review the different components and focus on which elements can be used to architect a data solution, how Fabric differs from previous technologies and how it doesn't, and why you would want to use it in your environment.
Developing AI solutions with your data
In this hands-on session you will learn how to provide answers using your data as the source by incorporating different AI elements. We will take a look at different methods for providing these answers, including knowledge management agents, chatbots, analytics agents, and generative AI models that answer prompts about your data. We will review how each of these elements works, how and why to build them, and how to select the appropriate tool for different data needs.
We will explore the different Azure components needed to create these systems and how you will need to structure your data to consume it. We will review the different features of Azure AI and how they can be combined to provide the required solutions.
During this session, we will explore the different generative AI models to determine which one best meets the needs of your organization. You will learn how to create a chatbot, and how it can be used in coordination with other models to create a solution.
Be sure to bring your laptop to see how you can create AI solutions during this session. We will walk through the code samples and Azure elements required to create these solutions. You will walk away from this session with the knowledge you need to implement an AI tool for your organization.
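For those who want a head start, here is a minimal sketch of the generative AI piece: answering a prompt with an Azure OpenAI chat deployment. The endpoint, key, deployment name, and prompt are placeholders, not materials from the session.

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<api-key>",                                        # placeholder
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<deployment-name>",  # your deployed chat model
    messages=[
        {"role": "system", "content": "Answer using only the supplied sales data."},
        {"role": "user", "content": "Which region had the highest revenue last quarter?"},
    ],
)
print(response.choices[0].message.content)
```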
Learning Goals
1. Learn to Utilize AI for Data-Driven Answers: Participants will be educated on how to leverage their own data using various AI elements such as knowledge management agents, chatbots, analytics agents, and generative AI models. The focus will be on different methods to extract answers from data, understanding which AI tool suits a particular data need.
2. Explore Azure AI Components and Data Structuring: The session aims to provide insight into the Azure ecosystem, particularly the AI components available and how they can be orchestrated to deliver solutions. Attendees will learn how to structure their data effectively for consumption by these AI models, ensuring efficient and practical implementation.
3. Hands-On Experience in AI Solution Creation: Attendees are encouraged to bring their laptops for a practical experience, where they will be guided through code samples and the usage of Azure elements necessary for creating AI solutions. The goal is to equip participants with the practical skills and knowledge to develop and implement AI tools within their organizations.
Fabric Data Security in Lakehouses and Data Warehouses
When data moves from a database to a lakehouse or a warehouse, how do you secure the data? How do you provide access to it? Can you implement row-level security on the lakehouse? Can you mask the data and have it appear masked inside a semantic model used for Power BI reporting? In this session, you will learn how you can implement security on lakehouses and data warehouses inside of Fabric and which one you should implement with your data. You will learn how to restrict access to data and how to implement object- and row-level security at the database level. See how you can implement user security similar to what you may have deployed in SQL Server on Fabric SQL Endpoints using SQL Server Management Studio.
SQL Saturday Jacksonville #1041 Sessionize Event
SQLBits 2023 - General Sessions Sessionize Event
Microsoft Azure + AI Conference Fall 2022 Sessionize Event
Data Toboggan - Cool Runnings 2022 Sessionize Event
SQL & Azure SQL Conference Spring 2022 Sessionize Event
SQLBits 2022 Sessionize Event
Data.Toboggan 2022 Sessionize Event
Live! 360 Orlando 2021 Sessionize Event
PASS Data Community Summit 2021 Sessionize Event
Global AI Back Together - Cleveland 2021 Sessionize Event
The North American Collaboration Summit 2021 Sessionize Event
Virtual 2021 Data.SQL.Saturday.LA Sessionize Event
Data.Toboggan - Cool Runnings Sessionize Event
Data Céilí 2021 Sessionize Event
Cloud Lunch and Learn Marathon 2021 Sessionize Event