Most Active Speaker

Brian Bønk

Principal Architect, Data & Analytics

Køge, Denmark

Brian has worked with SQL Server for more than two decades, on projects varying in both size and complexity. He now supports the Data & Analytics team at Fellowmind.

Honored as a Microsoft MVP in Data Platform since 2023.

Currently given the honor of being a Microsoft-recognized FastTrack Solution Architect, since 2022.

Brian loves data and is always trying to glue the business and tech together using his knowledge and experience.

He is always open to meeting new people and helping them get better at their job or task.


Area of Expertise

  • Information & Communications Technology
  • Health & Medical
  • Transports & Logistics
  • Government, Social Sector & Education
  • Business & Management


  • Kusto Query Language
  • SQL Server
  • Azure Data Explorer
  • Microsoft Fabric
  • Fabric Real-Time Analytics
  • Fabric Data Warehouse

The second visualization tool from Microsoft - is it actually better than Power BI?

When working with reporting and analytics, the end user sometimes has demands for very frequent updates and expects almost zero latency.

With the release of Real-Time dashboards in Fabric we now have the ability to meet these needs.

Join me on a trip into the next-gen visualization tool from Microsoft, which may be a better choice than Power BI.

This level 100-200 session is meant for the developer who wants to be on top of the game in analytics and visualization and get to know the new Real-Time dashboards feature in Microsoft Fabric.

We will start off with a short introduction to the Real-Time analytics area of Fabric and quickly dive into the dashboard feature of the service.
With near-real-time insights into data arriving in the database, and features like cross-filtering and easy user interaction, we end with a fully working dashboard and get to see live data streaming in and being made available to the end user.

- KQL (Kusto Query Language)
- Eventstream
- Real-Time dashboards
- Live reporting

Eventstream service 101

Want to learn how to handle the Eventstream service in Fabric?

Then attend this session to learn and see how to utilize and implement the Eventstream service in your data platform, for both OneLake and the Kusto engine.

In this session I'll guide you through the initial configuration of the service in Fabric and show you how to very easily bring streaming data (IoT, telemetry, etc.) into your data platform.

After this session I hope you will have ideas on how to implement Eventstream and which use cases make the service worthwhile.

- Eventstream
- IoT and telemetry
- OneLake

Microsoft Fabric Real-Time Analytics

In the fast-paced world of data analytics, real-time insights are the driving force behind informed decisions and competitive advantage. The long-awaited moment is here: Real-Time Analytics in Microsoft Fabric has reached general availability (GA), unveiling a wide range of transformative features and capabilities to empower data-driven professionals across diverse domains. Whether you’re an experienced business analyst, a curious citizen data scientist, or a passionate data engineer, Real-Time Analytics is your gateway to endless possibilities.

Flying High Amongst the Clouds with Fabric Warehouses

Concorde was the fastest commercial airplane ever built. With its sleek design and supersonic engines, it was light years ahead of its time. Now we are streaking through the clouds with Microsoft Fabric, with its sleek UI and turbo-charged engines. Join us for this deep-dive session into the world and wonders of the Warehouse service.

We'll take off from runway 101 with an introduction to the Warehouse and compare it with other Azure Data services.

From there, one of the two captains will take over and guide you through the available metadata and the underlying elements of the Warehouse service.

We will look at the engine, the fuel and the combustion of the queries and how the engine tries to lure us into thinking that we are managing the data in the OneLake storage. Perhaps not everything is what it seems.

With a swift bank towards the ingestion area, we will dive into how we can help the engine perform to the best of its abilities - in ingestion, data manipulation and data delivery to the end user, either through Power BI or through data sharing.

Before we have touchdown at our destination, we will see examples of what an architecture using the Warehouse in Fabric could look like, and talk about the pros and cons of using this service.

Your captain will be with you all the way from boarding to touchdown and guide you through an interesting flight in first class.

From this session you will be fitted with enough knowledge to get the best performance from Warehouses in Fabric, discover their metadata, and know where the service fits in an analytics architecture.

From zero to Kusto hero

Do you want to start your journey learning the Kusto Query Language (KQL) and are you willing to give it a go in 100 minutes?
Then this is the session for you.

The KQL language is used widely across the Microsoft Azure platform - from Log Analytics and Sentinel to the Real-Time Analytics service within Fabric.

In this session I will give you all the stepping stones and knowledge to start using KQL.

We start off on the taxiway with an introduction to the language and its structure, and a short version of the history of the Kusto engine.

From here we rush into take-off and ascend to the level of data discovery and outlier detection, with a sprinkle of KQL functions and the Eventstream service on top.

We end with use cases for KQL databases and how they work with the Data Activator service in Fabric.

Before touch-down every attendee will be able to write their own KQL statements and know how to leverage the services around the Kusto engine to gain the most out of their entire data estate.

Book your ticket and let's go from zero to Kusto hero.

Eventstream and Data Activator

Data Activator is a new product in Microsoft Fabric that monitors data in Power BI reports and Eventstream for when the data hits certain thresholds or matches other patterns.

It can take appropriate actions, like alerting through Teams and Outlook or starting a Power Automate flow to perform almost any task in the world.

Eventstream is the go-to service in Fabric when working with streaming data (IoT, telemetry, events, etc.). It is very easy to get started with and has some low-key methods of doing data manipulation before the data is saved at the destination.

In this session I will dive into these two services from the Real-Time Analytics area.
First I will set up a data stream using the Eventstream service and show you how to use it.
After that I will dive into the Data Activator service and show you how it works and what the possibilities are with this new data-aware service.

This is a level 100 session, and it is interactive, as I highly encourage attendees to ask questions during the session.

Are Synapse serverless and Fabric warehouse both Business Class?

In this session I'll make a deep dive into the performance metrics between the Azure Synapse Serverless SQL pool (using CETAS, Views and raw parquet files) and the Fabric Warehouse service (with its Delta Parquet files).

The session takes off from runway 3A towards Datatown with common knowledge of T-SQL code and the usage of the Synapse Serverless SQL pool together with Pipelines and data modelling. We dive directly into the two engines and try to figure out how to get the best out of each of them.

We will look into the different performance pros and cons and discuss the usage scenarios between the two approaches in a modern data platform.

This is a L200 session and you should take home from the session:
- Know the pros and cons between the different implementations of data usage in the serverless SQL pool
- Learn the pros and cons in the Fabric Warehouse scenarios
- Know how to choose between them
- Learn the underlying engine for Synapse serverless and Fabric Warehouse to better work with the engine and get the best performance

Upon touchdown you will have a good understanding of the differences between the two engines and whether they are both at Business Class level for a flight in your business.

Intro to Serverless SQL in Azure Synapse

If you are new to Synapse and want to know how to get started, then this is the session to attend.

I'll guide you through the Synapse application and demo different ways to use the serverless SQL pools from Synapse.

You'll learn how to do simple setups and work with different file types from Blob storage, and I'll give you tips and tricks to get the most out of your Synapse platform.

- Learn the basics of Azure Synapse Serverless SQL
- Know the ins and outs of the service
- Go home and implement the service

The Kusto experience in Fabric

Want to make a cutting-edge data platform using Kusto in Fabric?
Then come along for a journey into the world and wonders of Kusto and the new Eventstream service in Fabric.

In this session I will guide you through the experience and the close collaboration between Fabric and the Kusto database and querysets, along with how to help Power BI get the most out of your Kusto database with small tweaks.

Sprinkled with a bit of the new magic in Jupyter notebooks for the Kusto language, we will also set a course towards how the data engineer can begin to leverage the Kusto service.

After watching this level 200 session I hope you'll have found inspiration to work with the Kusto service in Fabric and know how to give the Kusto engine the small push it needs to perform at its best in close collaboration with Power BI.

- Kusto database
- Kusto Queryset
- Kusto magic (%kql)
- Power BI
- Eventstream

The Data Warehouse in Fabric

With the integrated Data Warehouse service in Fabric we are now able to utilise the power of SQL Server directly on top of the Data Lakehouse.

In this session I will talk about the new feature set of the Data Warehouse in Fabric and show a myriad of architecture blueprints on how to implement the service and use it to its full potential.

With a journey through Delta Lake parquet files, OneLake, external tables and data processing using stored procedures, it will be a thrilling ride showing you all the great features.

Hang tight and dive with me into the Data Warehouse service in Microsoft Fabric.

Know the game you are in - and you will not win

Have you ever thought about what game you are in when doing business?

The business you are doing every single day - data, integration, performance or whatever your field of expertise and interest is.

In this session I’ll tell you a story of two different games and two different outcomes - games we all play every single day. With the right mindset and approach to those games, you’ll learn that you don’t have to win - whether in your business, personally or in any other way.

Whether you are a consultant, employed internally or the manager of a team - with the right approach to your everyday work, you’ll learn to tackle the different, and perhaps sometimes stressful, situations.

From this session you’ll:
- learn to spot the current game you are in
- get tools to play the game right
- return home with a new approach to your everyday life in data

Azure Synapse - CETAS vs views

A deep dive into the performance of the Azure Synapse serverless SQL pool and the different usage scenarios and performance metrics of CETAS and views.
We will look into the different performance pros and cons and discuss the usage scenarios between the two approaches in a modern data platform.

Load patterns for Azure Data Explorer

Ever wondered how best to load data into Azure Data Explorer and what to consider when doing so?
Then attend this session to learn a myriad of load patterns for data both to and from Azure Data Explorer and Synapse Data Explorer.

I will deep-dive into the use cases and how to set up the architecture for the best loading of time-series data (IoT, EoT, logs, etc.).

This session targets the developer who is planning to use Azure Data Explorer and wants to do things the right way when loading and offloading data.

After this session you will be inspired and take home some load patterns to implement directly in your own business.

- Azure Data Explorer
- Synapse Data Explorer
- Load patterns

Understand the Kusto engine and ADX service

Want to be on the verge of cutting-edge technology by using Kusto and Azure Data Explorer? The service is evolving fast in the Microsoft universe.
Then attend this session for a journey into the setup of the Azure Data Explorer service, covering the underlying engine's way of managing and storing data.

The session will also cover loading patterns to get data to the Kusto engine and some very interesting ways of using the data in the cluster.

This session is targeted at developers working with reporting, time-series data, or streaming/log data.

After watching this session I hope you will have gained new-found inspiration to use the Azure Data Explorer service and know what purposes the service can be leveraged for.

- Azure Data Explorer
- Synapse Data Explorer
- Kusto Query Language (KQL)
- The Kusto engine

Azure Synapse serverless - CETAS vs Views vs Parquet

In this session I'll make a deep dive into the performance metrics of the Azure Synapse Serverless SQL pool and show the different usage scenarios for using CETAS, Views and raw Parquet files.

The session takes off from common knowledge of the Synapse Serverless SQL pool and how to use Pipelines and data modelling. We dive directly into the engine and try to figure out how we can get the best out of it.

We will look into the different performance pros and cons and discuss the usage scenarios between the three approaches in a modern data platform.

This is a L200 session and you should take home from the session:
- Know the pros and cons between the different implementations of data usage in the serverless SQL pool
- Know how to choose between them
- Learn the underlying engine for Synapse serverless to better work with the engine and get the best performance

The power between Kusto and Power BI

Kusto (and Azure Data Explorer) makes it possible to report on live data from e.g. telemetry, IoT devices, logs, etc. The Kusto query language itself is not that hard to understand and use, but there are some key takeaways and good things to know when starting to do analytics on the data using Power BI.

In this session I'll cover the ultimate power between Power BI and Kusto, show how the data from the Kusto cluster can be leveraged in Power BI, build a data model, and show how we can help Power BI use the data correctly.
Power BI needs a bit of help to give good performance when working with Kusto.

Kusto-wise this is a level 100 session; Power BI data-model-wise a level 200. The session is at a level where everyone can follow along and use the methods shown directly in their daily work when they come home.

The anti-patterns to my career in data

Not every data professional has a straight career path. Sometimes it is 'by accident' that you get to work with data.

In this session I'll give you a glimpse of my path into the data professional sphere and show that you (just) have to start and keep trying to learn.

From a small suburb in Denmark where I grew up to a position at one of the biggest European Microsoft partners.

A story from the heart and with transparency on the anti-patterns for developing a career within data.

Driving Data Intelligence with Data Megatrends

The next challenge in data is rapidly becoming clear: How can we scale data value and bring data driven decision making to everyone?

The 3 megatrends shaping modern data strategy – Data Mesh, Data Fabric, Modern Data Stack – are all about crossing the last mile to get data to everyone, not just data experts.

Are these frameworks the road to actually scaling data value?

Key takeaways:
- introduction to the megatrends within data
- how to scale the value of data
- an approach to handle the data objective in the organisation

Time travel your data

Ever wanted to do time travel?
Now you can…

With the new technologies from Delta Lake and Databricks it is now possible to have both a time-travel option and an out-of-the-box type-2 history option with very little effort.

Join this session to learn:
- how to set up and prepare your data in the data lake
- a deep dive into the best practices and pitfalls around structuring your data
- tips and tricks for the best usage of the data lake’s functional catalogue to build your best-performing architecture on the data lake

Synapse SQL - serverless vs dedicated

You are faced with the question: when do I choose which version of SQL endpoints in Azure Synapse?

In this session you’ll be shown the differences between the two options using live demos and slides, and given real-life examples of implementations ranging from the small data platform to the large enterprise-scale setup with real-time consumption of data.

The session will also cover some of the pitfalls and do’s/don’ts around Synapse SQL.

Before discussions and questions, I'll demo how to overcome some of the limitations of Synapse SQL, as it does not provide the complete functionality of the usual Azure SQL or on-prem versions.

After this session you will be able to go back to your organization or consultancy and advise on the choice between the two options and be able to help with the implementation.

Kusto - an introduction

Kusto is the language to use for, amongst others, Azure Data Explorer, streaming data, and data from Log Analytics. The language seems a bit like SQL, but when things get rolling it can be confusing and hard to get right.

In this introduction session I cover the basics of the Kusto language and how to get started working with streaming, Log Analytics and time-series data. I'll start with an introduction to the language, followed by usable examples and implementations of different queries. I will also cover methods for working with scalar variables and window functions.

This is a level 100 session with a bit of level 200 at the end - a good starting point for developers who are interested in the Kusto language and would like to get going.

Serverless, Classic or Lakehouse – The battle of architectures

Either you want to dip into the realm of Azure Synapse Analytics, or you already have and feel overwhelmed! What are all these new technologies, design patterns, and terminologies around building a data platform? Are Data Lakehouses a must-have, or will a simpler classic SQL-based architecture suffice?

Tag along with this session where we will guide you through the rabbit hole and cover three of the current major paradigms in data platform architecture: the Serverless Logical Data Warehouse, the Classic SQL Warehouse, and the Data Lakehouse with Databricks.

How do you choose, and how do you get the best out of your choice? In this session, we will cover what you need to understand the different approaches and the knowledge needed to use and choose between the different frameworks.

After attending this session you should feel confident about what direction to move your own data platform and feel inspired by the demos and examples presented.

- Data Platform Paradigms
- Understanding the core principles of the three
- Evaluating solution fit
- Practical implementation

Practical experiences from working with Synapse

You've started the Synapse journey and have faced your first challenges with the platform. What to do now and how to get the best out of it? Maybe you feel alone on the journey and would just like a bit of inspiration from other data platform nerds; our successes and failures and anything in-between.

In this session, we'll share a series of practical experiences from working with Synapse and how to get the best out of it: how to get started, what challenges you may face, how to overcome them, and how to get the most out of the platform.

You’ll leave with an understanding of the most common mistakes people make when working with Synapse and how to avoid them. After attending this session you should feel more confident about working with Synapse and have a better understanding of how to get the most out of the platform.

- Practical experiences with Azure Synapse Analytics
- Common issues and solutions
- Interactive discussion

Power BI & Azure Synapse Analytics - better together

Want to make a cutting-edge data platform using Power BI with Azure Synapse? Then come along for a journey through the wonders of powerful data pipelines mixed with top-notch data modeling and visualization capabilities. A true match made in heaven.

In this session we will talk about the synergies of Azure Synapse Analytics and Power BI and how you decide which tools to use for which task, sprinkled with a multitude of data architecture blueprints.

After watching this session we hope you have new-found inspiration for the possibilities of combining Power BI and Azure Synapse Analytics, as well as a solid foundation of do’s and don’ts when building your own modern BI data platform.

- Power BI and Azure Synapse Analytics
- Modern BI data architecture
- Synergies and capabilities
- Pipelines, data serving, and integration

Microsoft Azure Pakistan Community (Meetup - Call for Speakers) 2022 User group Sessionize Event Upcoming

Not scheduled yet.

Data Community Austria Day 2024 Sessionize Event

January 2024 Vienna, Austria

Newcastle Power BI User Group

January 2024 Newcastle upon Tyne, United Kingdom

DATA BASH '23 Sessionize Event

November 2023

#DataWeekender 6.5 Sessionize Event

November 2023

Data Platform Diversity, Equity, and Inclusion Virtual Group User group Sessionize Event

October 2023

Data Relay 2023 Sessionize Event

October 2023

SQL Konferenz 2023 Sessionize Event

September 2023 Hanau am Main, Germany

Data Saturday Gothenburg 2023 Sessionize Event

August 2023 Göteborg, Sweden

Data Toboggan - Cool Runnings 2023 Sessionize Event

June 2023

Data Saturday Rheinland 2023 Sessionize Event

June 2023 Sankt Augustin, Germany

SQL Friday Season 6 (January - June 2023) User group Sessionize Event

June 2023

Data Saturday Croatia 2023 Sessionize Event

June 2023 Zagreb, Croatia

Days Of Knowledge Nordic 2023 Sessionize Event

May 2023 Odense, Denmark

APAC Azure Community

March 2023 Singapore

SQLBits 2023 - General Sessions Sessionize Event

March 2023 Newport, United Kingdom

Data Swindon User group Sessionize Event

February 2023

Data Toboggan 2023 Sessionize Event

January 2023

Data Saturday Parma 2022 Sessionize Event

November 2022 Parma, Italy

PASS Community Summit 2022

Lessons learned from working with Azure Synapse Analytics

November 2022 Seattle, Washington, United States

Computerworld - Enterprise Architecture day 2022

Don't forget the Data in enterprise architecture

September 2022 Kongens Lyngby, Denmark

DATA:Scotland 2022 Sessionize Event

September 2022 Glasgow, United Kingdom
