Jens Vestergaard

BI Dude

Copenhagen, Denmark

Jens is a hands-on partner at CatMan Solution, driving the BI-as-a-Service team as well as custom build-and-deliver projects.
His tech stack is wide, as Jens has a couple of decades in the industry, focusing mainly on Business Intelligence tools, but he is also keen on PowerShell, C#, and Azure in general.
Jens has been a Microsoft Data Platform MVP seven years in a row and is a frequent speaker at both local and international events.

Awards

  • Most Active Speaker 2023

Area of Expertise

  • Information & Communications Technology

Topics

  • Microsoft Power BI
  • Microsoft Azure
  • Azure DevOps
  • Azure
  • Azure Data Factory
  • Azure SQL Database
  • Azure Functions
  • Azure Data Lake
  • Azure Data & AI
  • Azure Data Platform
  • Microsoft Fabric
  • Fabric
  • Azure Synapse
  • Azure Synapse Analytics
  • Azure Synapse Analytics (formerly Azure SQL DW)
  • Azure Synapse SQL Serverless
  • Azure SQL Synapse
  • Synapse
  • Azure Service Fabric
  • Azure Logic Apps
  • Power Automate
  • Microsoft Power Automate
  • Power Platform
  • Microsoft Power platform
  • Power Apps
  • Power Query

Streamlining your data lake workflows with Azure Event Grid

In this session we'll cover how Azure Event Grid can be used to unify and automate your data loading workflows, including how messages can be utilized to communicate changes in state in the flow of data.
Azure Event Grid can for instance be used to monitor changes in state within the layers of a data lake and trigger downstream processing tasks. For example, when new data is added to the raw data layer, Event Grid can be configured to send a message indicating that new data has been added. This message can then be used to trigger the next stage of the data processing pipeline, such as data cleaning or enrichment.
Messages can also be used to provide additional context and metadata about the changes in state within the data lake. For example, a message can include information about the type of data that was added, the timestamp of the data, and the source of the data.
By utilizing messages to communicate changes in state within the data lake, Azure Event Grid enables a more efficient and streamlined data processing pipeline. Data loading workflows can be automated and triggered in real-time, reducing manual intervention and improving overall efficiency.
Within Event Grid you can configure various topics to target specific downstream audiences, and subscribers can in turn filter on a topic if they only need to react to a subset of the messages.
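
As a minimal sketch of the pattern (the topic endpoint, key, event type, and payload below are placeholders, not the session's actual setup), publishing a custom "new raw data" event to an Event Grid topic from Python could look like this:

```python
from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridEvent, EventGridPublisherClient

# Placeholder topic endpoint and access key.
client = EventGridPublisherClient(
    "https://<your-topic>.<region>-1.eventgrid.azure.net/api/events",
    AzureKeyCredential("<topic-access-key>"),
)

# Announce that a new file has landed in the raw layer of the data lake,
# including a bit of metadata (type, timestamp, source) for downstream consumers.
event = EventGridEvent(
    event_type="DataLake.Raw.FileAdded",
    subject="/raw/sales/2024/01/sales_20240115.parquet",
    data={
        "fileType": "parquet",
        "ingestedAt": "2024-01-15T06:30:00Z",
        "source": "erp-export",
    },
    data_version="1.0",
)
client.send(event)
```

A downstream subscriber, such as an Azure Function, can then filter on the event type and kick off cleaning or enrichment for the file named in the subject.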

We will look at how to set up and configure an Event Grid, go over how to send messages to different topics, and cover configuring an Azure Function to react to specific messages as well as logging all messages into a data lake.

By attending this session, you will get an overall insight into how Event Grid works and see some applied use cases.

Introducing KQL

Whether you are taking your first steps into the plethora of Azure services or you are a proven warrior, there is one service that is there to guide you and your application(s) to the next level. Azure Monitor allows you to collect, analyze, and act on telemetry from both your cloud and on-premises environment(s) and application(s).

After introducing the Azure Monitor service and its basic moving parts, this session will focus on getting you started querying the application logging data stored within. To do this efficiently, we will introduce the Kusto Query Language (KQL).

This language is designed to be easy to read and author. However, getting started with any new language is almost always easier by example, so that's where we will begin.
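
For instance, a first KQL query might count requests per hour over the last day; the sketch below runs such a query from Python with the azure-monitor-query package (the workspace ID is a placeholder, and the table name depends on your own setup):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Count requests per hour over the last 24 hours.
kql = """
AppRequests
| where TimeGenerated > ago(1d)
| summarize requests = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated asc
"""

result = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=kql,
    timespan=timedelta(days=1),
)

for table in result.tables:
    for row in table.rows:
        print(row)
```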

After getting a feel for KQL, we will dive into some more advanced scenarios, create a small dashboard covering a couple of services, and wrap up by setting alerts based on results from KQL queries.

Code repos and automated deployment

This session is for the fearless, gutsy, and heroic data professional who spends most of their work/wake hours fixing the burning platform, in production. While being able to fix the train while it's running solves a lot of business problems, it almost certainly leaves you exhausted or maybe even burned out. Perhaps you even went for that new position in that other company to escape your tedious deployment responsibilities. This is where a code repository and automated deployments can help save your job in more than one way.

Even if the scenario is not that grim, you could leverage code repos for many other beneficial habits: Do you share your code/artifacts with peers to discuss, in a secure and clever way? How do you track which requirement was released when?
The impact of using a setup such as Azure DevOps to host your requirements and the code itself is invaluable, in my opinion, and I would like to take this opportunity to show you why.

We will follow an artifact (a SQL Database, an Azure Function, or something entirely different) from inception to an actual running service, via a number of Work Items in Azure DevOps and a matching Azure Repo, finally packaged and deployed using an Azure Pipeline/Release.

Azure Data Integration Bootcamp

In this session we will be diving into most of the major moving parts of an automated enterprise BI solution, as per the Microsoft reference architecture (Enterprise business intelligence - Azure Reference Architectures | Microsoft Docs). Azure Active Directory, Blob storage, Azure Monitor, Azure Synapse, Azure Data Factory, Azure Analysis Services, and Power BI serve as key pillars in building a solid custom framework for automated data ingestion and analysis. Learn how to set up each of these services, how they interact, and how to benefit from the built-in automation in Azure.
Leading by example, we will trace data from various sources on its journey through the Azure services, configuring them as we go along.

The day consists of a number of parts, one for each major aspect of the architecture.

Introduction (1 hr)

Break - 15 min

Sources (1 hr)
In this section, we will go over some of the options available in Azure, and how we would go about sourcing our data.
The most common sources we will cover are:
- Azure SQL Database
- (S)FTP folder(s)
- Blob storage
- WebService

Break - 15 min

Ingestion (2 hrs)
This part covers the staging area, i.e. the landing zone in our Azure Data Lake Storage Gen2, discussing the various options we have for supporting Delta Lake and the like. A big part of this will also focus on setting up Azure Synapse Analytics, so that we can examine which offering is the most appropriate for our scenario.
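
As a rough sketch of the landing-to-Delta step (paths and schema are illustrative; `spark` is the session provided by a Synapse notebook):

```python
# Read the files that landed in the raw zone of ADLS Gen2 ...
raw_path = "abfss://raw@<storageaccount>.dfs.core.windows.net/sales/2024/01/"
df = spark.read.option("header", "true").csv(raw_path)

# ... and persist them as a Delta table in the next layer of the lake.
bronze_path = "abfss://bronze@<storageaccount>.dfs.core.windows.net/sales/"
(
    df.write
      .format("delta")
      .mode("append")
      .save(bronze_path)
)
```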

Break - 1 hr - Lunch

Automation (1 hr)
The automation part in Azure concentrates on Azure Data Factory/Azure Synapse pipelines and the number of ways the platform offers to invoke pipelines/notebooks/data flows.
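
One such option is invoking a pipeline from code; a minimal sketch (subscription, resource group, factory, pipeline name, and parameters are placeholders) that kicks off a Data Factory pipeline run via the Python management SDK:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholder subscription ID; authenticate with the default Azure credential chain.
adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Start a run of a named pipeline, passing a parameter to the run.
run = adf.pipelines.create_run(
    resource_group_name="rg-bi",
    factory_name="adf-bi",
    pipeline_name="pl_ingest_sales",
    parameters={"loadDate": "2024-01-15"},
)
print(f"Started pipeline run {run.run_id}")
```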

Break - 15 min

Model (1 hr)
In this part we will be creating a Power BI model, where we will investigate how to configure partitions with incremental data refresh and what options we have for deployment.

Break - 15 min

Wrapping it all up (1 hr)
Rounding off the day by going over all of the major moving parts and how they interact. We will also briefly touch on some of the pitfalls to avoid.

Prerequisites:
Basic knowledge of Azure and some data platform technology is helpful.

Abusing a Tabular Model - For fun

At CatMan Solution we have about 70 clients running one of two versions of a tabular model. Deploying the right model to the right client takes a well-orchestrated deployment pipeline as well as some well-honed C# skills to create a set of Azure Functions that operate on the Tabular Object Model (TOM) and manipulate the deployed database via Analysis Services Management Objects (AMO) once it's live.

This session will walk you through the options and choices we have to make when deploying a generic model to fit customized needs.

Using Azure DevOps as a starting point for orchestrating the deployment process, we will walk through the moving parts needed to ensure that the client has a tabular model deployed and ready to go, with respect to the following options:
- Artifact
- Culture
- Connections
- Expressions
- Roles
- Custom Translations
- Async Processing
- Build Version

The artifacts are taken from a git repo in Azure DevOps, and all sensitive information is stored in Azure Key Vault.
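
The deployment code in the session is C#, but just to illustrate the Key Vault part, a minimal Python sketch fetching a client-specific connection string (the vault URL and secret name are placeholders) could look like this:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault URL and secret name.
secrets = SecretClient(
    vault_url="https://kv-deployment.vault.azure.net",
    credential=DefaultAzureCredential(),
)
connection_string = secrets.get_secret("client-042-sql-connection").value
```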

Attending this session will give you insights into a lot of options for deploying a highly configurable tabular model.

Power BI Live Datasets: Monitoring your key metrics

In this session we will explore options in Power BI to stream real-time data to the service.
The differences between pushing, streaming, and PubNub streaming will be explained, and we will dive deep into each of the three methods.
Join this session to learn how to get live data into your Power BI service.
The session will cover everything from the basics through to best practices.
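
To give a feel for the push method, here is a minimal sketch posting a row to a push/streaming dataset's API endpoint; the push URL and column names are placeholders you would copy from the dataset's API info in the Power BI service:

```python
import requests

# Placeholder push URL taken from the streaming dataset's "API info" page.
push_url = (
    "https://api.powerbi.com/beta/<tenant-id>/datasets/<dataset-id>/rows?key=<api-key>"
)

# One row matching the columns defined on the streaming dataset.
rows = [
    {"timestamp": "2024-01-15T06:30:00Z", "sensor": "line-1", "temperature": 21.4}
]

response = requests.post(push_url, json=rows)
response.raise_for_status()
```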

Model Deployment

In this session, we will cover a variety of ways to deploy models, both from pbix files and from Visual Studio projects. As there are a good number of ways to do this, manually as well as automated, we will cover some of the more prevalent ones. The following methods will be covered:

- Visual Studio (manual)
- Tabular Editor (manual & automated)
- Deployment Wizard (manual & automated)
- Azure DevOps (automated) through
  ○ PowerShell
  ○ Tabular Editor
  ○ Marketplace component(s)

Ingesting Data w/ Power BI

In this session I will be demonstrating how easily Power BI lets you ingest almost anything: from simple files to complex multi-file scenarios. The Power BI Desktop application lets you solve 80% of the challenges you have with data via the GUI, and the other 20% we will deal with using the Advanced Editor. We will spend time in both accordingly.

In detail we will be looking at these topics:
- Straight up files (csv, xlsx)
- Scraping web page data (html)
- Header/footer issues
- Variable number of columns
- Multiple file formats (think historical changes)
- Binding multiple imports into a single table
- Crude error handling

Attending this session, you'll learn the basics of Power BI Desktop, as well as some neat tricks to get through the more complex scenarios.

Additionally, I will demonstrate how to deploy your home-grown model into Azure Analysis Services.

DevOps for BI

If you are releasing database changes, new reports, cubes, or SSIS packages on a regular basis, you've probably offered up your share of blood, toil, tears, and sweat getting them delivered into production in working condition.
DevOps is a way to bridge the gap between developers and IT professionals, and for that we need to address the toolchain that supports the practices. Microsoft offers a set of tools that'll help you on your journey towards the end goal: maximizing predictability, efficiency, security, and maintainability of operational processes.

We will in detail be looking at:

- Agile Development Frame of Mind
- Visual Studio Online (tool)
- Feature/PBI/WI (concept)
- Team Foundation Server
- Code Branching (concept)
- Build Agents (tool)
- PowerShell
- Microsoft's Glue (tool)

ABC's of the Power BI REST API

In this session we will be looking into managing our Power BI content using only the Power BI REST API. While the Power BI REST API is extensive, we will limit this session to the following sections of interest: Dashboards, Datasets, Reports, and Groups.
How many Dashboards are there in a workspace? Which Datasource is this Dataset configured to use? Who is allowed to see this Report? Can I take ownership of this Dataset? How do I refresh my Dataset? Is my Dataset refreshing on a schedule? ... Those are just some of the questions we will find the answers to.

Examples will be provided in PowerShell, which requires only minimal skills up front. Not to worry, a quick intro will be provided as well.
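
To give a taste of the API outside PowerShell, a rough Python sketch (the workspace and dataset IDs are placeholders) that answers two of the questions above, counting the dashboards in a workspace and kicking off a dataset refresh:

```python
import requests
from azure.identity import InteractiveBrowserCredential

# Acquire a token for the Power BI service.
credential = InteractiveBrowserCredential()
token = credential.get_token("https://analysis.windows.net/powerbi/api/.default").token
headers = {"Authorization": f"Bearer {token}"}

group_id = "<workspace-id>"    # placeholder
dataset_id = "<dataset-id>"    # placeholder
base = "https://api.powerbi.com/v1.0/myorg"

# How many Dashboards are there in a workspace?
dashboards = requests.get(f"{base}/groups/{group_id}/dashboards", headers=headers).json()
print(len(dashboards["value"]))

# How do I refresh my Dataset?
requests.post(
    f"{base}/groups/{group_id}/datasets/{dataset_id}/refreshes", headers=headers
).raise_for_status()
```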

Attending this session will make you familiar with the Power BI REST API and provide you with guidance on how to manage the most common tasks in Power BI.

Notebooks in Microsoft Fabric

The session provides a comprehensive overview of PySpark Notebooks in Fabric, covering basic to intermediate concepts. Participants learn about PySpark and explore RDDs, DataFrames, and their operations, including loading/saving data, filtering, aggregating, and sorting.

The session then focuses on PySpark SQL, enabling participants to execute SQL queries on DataFrames, create temporary views, and perform advanced operations like joins and subqueries.
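
A small PySpark sketch of the temp-view/SQL pattern covered here (the `spark` session is provided by the Fabric notebook runtime; the table and column names are illustrative):

```python
# Load two lakehouse tables as DataFrames.
orders = spark.read.table("orders")
customers = spark.read.table("customers")

# Expose them as temporary views so they can be queried with SQL.
orders.createOrReplaceTempView("v_orders")
customers.createOrReplaceTempView("v_customers")

# Join and aggregate with plain SQL on top of the DataFrames.
top_customers = spark.sql("""
    SELECT c.customer_name, SUM(o.amount) AS total_amount
    FROM v_orders o
    JOIN v_customers c ON o.customer_id = c.customer_id
    GROUP BY c.customer_name
    ORDER BY total_amount DESC
    LIMIT 10
""")
top_customers.show()
```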

By the end of the session, participants are equipped with the necessary skills to efficiently process and analyze large-scale data, leveraging PySpark Notebooks for data-driven tasks.

Using Azure EventGrid w/ Fabric Notebooks

In this session we'll cover how Azure Event Grid can be used to unify and automate your data loading workflows using Fabric Notebooks, including how messages can be utilized to communicate changes in state in the flow of data.
Azure Event Grid can for instance be used to monitor changes in the layers of a data lake and trigger downstream processing tasks, handle logging, telemetry and much more.
By utilizing messages to communicate actions within the workflows in the data lake, Azure Event Grid enables a more efficient and streamlined data processing pipeline. Data loading workflows can be automated and triggered in real-time, reducing manual intervention and improving overall efficiency.

In the context of a Fabric Notebook, we will cover the steps needed to set up and configure the "backend Azure stuff", as well as how to configure the workspace to enable the link to Azure Event Grid. Once that is in place, we will explore some of the options this capability gives you.

Specifically, we will look at how to use Azure Event Grid for:
- Logging data processing events
- Logging telemetry
- Logging sample data, data statistics etc. (leveraging features from spark)

Attending this session will leave you with an introduction to Azure Event Grid and its message structure. You will also learn how to utilize this to create a framework for automating data processing in Fabric Notebooks, as well as reporting statistics on top of the flows of data in your workspace.
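
As a sketch of the logging idea (the topic endpoint, key, event type, and payload shape are assumptions for illustration, not the session's exact framework), a small helper a Fabric notebook might call after each processing step:

```python
from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridEvent, EventGridPublisherClient

# Hypothetical helper: publish row counts and basic statistics for a Spark
# DataFrame to an Event Grid topic after a notebook processing step completes.
def log_step(step_name: str, df) -> None:
    client = EventGridPublisherClient(
        "https://<your-topic>.<region>-1.eventgrid.azure.net/api/events",  # placeholder
        AzureKeyCredential("<topic-access-key>"),                          # placeholder
    )
    event = EventGridEvent(
        event_type="Fabric.Notebook.StepCompleted",
        subject=f"/notebooks/ingest_sales/{step_name}",
        data={
            "step": step_name,
            "rowCount": df.count(),
            "columnCount": len(df.columns),
        },
        data_version="1.0",
    )
    client.send(event)
```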

Join me to learn about scalable automation in Microsoft Fabric using Azure Event Grid.


Cloud Technology Townhall Tallinn 2024 Sessionize Event

February 2024 Tallinn, Estonia

DATA BASH '23 Sessionize Event

November 2023

SQLBits 2023 - General Sessions Sessionize Event

March 2023 Newport, United Kingdom

SQLBits 2023 - Full day training sessions Sessionize Event

March 2023 Newport, United Kingdom

Data Insight Summit Sessionize Event

September 2022 Chicago, Illinois, United States

Power BI Summit Sessionize Event

April 2021

datasaturdays.com Pordenone 2021 #0001 Sessionize Event

February 2021

Virtual Scottish Summit 2021 Sessionize Event

February 2021

Global Power Platform Bootcamp 2021 - Italy Sessionize Event

February 2021
