Koen Verbeeck

Senior Consultant @ AE - Data Platform MVP

Rotselaar, Belgium

Koen Verbeeck is a data professional working at AE. He helps organizations gain insight into their data and improve their analytics solutions. Koen has over a decade of experience in developing data warehouses, models and reports using the Microsoft data platform. He has been a Microsoft Data Platform MVP since 2017. He blogs at http://www.sqlkover.com, writes articles for MSSQLTips.com and is a frequent speaker at Microsoft Data Platform events.

Awards

  • Most Active Speaker

Area of Expertise

  • Information & Communications Technology

Topics

  • Azure Data Factory
  • SQL Server
  • SQL Server Analysis Services
  • SQL Server Integration Services
  • Azure SQL Database
  • Data Visualization
  • Power BI
  • SQL Server Reporting Services
  • Master Data Services

Raiders of the Lost ADX

Often we need to find a way to uncover the hidden insights buried deep within the data jungles of the company.
This can be a challenging task, but legend speaks of Azure Data Explorer (ADX), a brand-new time-series database in Azure that many claim can handle even the most massive and complex data sets.

If you can handle the adventure, join our two explorers as they set out on a journey to discover the vast capabilities of ADX.
We'll navigate the setup and configuration of ADX like Indiana Jones navigated ancient ruins, and with whip-like queries we'll uncover the power of the Kusto Query Language (KQL) to solve our data puzzles.
As we delve deeper into this data exploration, we'll find that ADX is much more than a simple tool; it's like having a trusty sidekick by our side, helping us navigate the treacherous waters of data analysis.
Along the way, we'll also unearth the link between ADX and Azure Cosmos DB.

At the end of this ride, the audience will realize that with ADX on their side, they too can become daring data explorers. They will know when to use ADX and what its strengths and weaknesses are.

Migrate your SSIS skills to Azure Data Factory

Integration Services is the new COBOL. Well, at least according to some people, but there might be some truth to that statement. Azure Data Factory (ADF) and Fabric Pipelines are now seen as the replacement for SSIS in the cloud.
In this session, we will take a look at how you can leverage your SSIS skills in ADF. We will also look at some patterns that were valid for SSIS but are a bad idea in ADF, or the other way around. At the end, you should feel confident to take up new projects in ADF.
The target audience of this session is BI/data warehouse developers and data engineers who have experience with SSIS (or other ETL tools), but little to no experience with ADF.

Microsoft Fabric for Dummies

Microsoft Fabric is Microsoft's new all-encompassing data platform. But with so many components, services and features to learn, it can seem overwhelming.

In this session, we take a step back and start from the beginning. We will cover the architectural overview of Fabric, its building blocks and which use cases are covered by this platform.

At the end, you should be able to assess if Fabric is the right choice for your environment.

How I saved 80% on my Azure Data Factory Costs

Azure Data Factory (ADF) is the tool in the Azure cloud for creating your data pipelines. It allows you to move and transform data from a multitude of sources to as many possible destinations. However, like most cloud services, it is not free. If you're not careful, you might end up with an unexpectedly high bill at the end of the month.
In this session, I'll give an overview of the ADF building blocks and explain a couple of design patterns to save you money. At the end, you will know which pitfalls to avoid, and how to keep your CFO happy.

Analyzing Cosmos DB Data with Azure Synapse Link

Azure Cosmos DB is a fully managed NoSQL database, offering different APIs and global scalability. You can use it as a document database, for example to store many different JSON documents.
But what happens when you need to run large analytical queries on top of your Cosmos DB? If the data is partitioned differently than your query access path, you might end up scanning the entire container.

In this session, we will walk you through the concept of the analytical store in Cosmos DB, and how it works together with the Azure Synapse Link feature of Azure Synapse to enable painless analytics on your data. We will show you how you can use Serverless SQL and Spark pools to query the data. At the end, you should be able to assess if this solution is a good match for your use cases.

Basic knowledge of SQL, JSON and Azure Data Factory is assumed.
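
As a taste of what this looks like, a Serverless SQL query against the Cosmos DB analytical store uses OPENROWSET with the CosmosDB provider. This is a minimal sketch; the account, database, key and container names are placeholders:

    -- Query the Cosmos DB analytical store from a Synapse Serverless SQL pool
    SELECT TOP 10 *
    FROM OPENROWSET(
        'CosmosDB',
        'Account=myCosmosAccount;Database=myDatabase;Key=<account-key>',
        myContainer
    ) AS documents;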

Moving Data around in Azure

In the Microsoft Data Platform ecosystem in Azure, there are several possibilities for moving data around. The choice can be overwhelming, and it's not always clear which option should be used.
In this session, we aim to give you an overview of a couple of products in Azure for data pipelines: Azure Data Factory, Logic Apps, Integration Services et cetera. At the end, you'll have a better understanding of the various alternatives in Azure.

Introduction to Azure Data Factory

Azure Data Factory (ADF) is the tool in the Azure cloud for creating your data pipelines. It allows you to move and transform data from a multitude of sources to as many possible destinations. In this session, we will give an introduction to this product and demonstrate how you can build your first pipeline.
The following topics are covered:
* creating a pipeline and scheduling it for execution
* monitoring the results
* migrating SSIS packages to ADF
* what are mapping and wrangling data flows?

You don't need any special knowledge for this session, but an understanding of the Azure cloud and ETL tools is a plus.

Finding and Fixing T-SQL Anti-Patterns with ScriptDOM

Quality code is free of the things we call 'anti-patterns': NOLOCK hints, SELECT *, queries without table aliases and so on.
We may also need to enforce certain standards: naming conventions, ending statements with semicolons, indenting code the right way, etc. Furthermore, we may need to apply specific configurations to database objects, such as creating tables on certain filegroups or using specific settings for indexes.
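
To make this concrete, here is a small illustrative query (the table and column names are invented) showing a few of these anti-patterns, followed by a cleaned-up version:

    -- Anti-patterns: SELECT *, a NOLOCK hint, no table alias, no semicolon
    SELECT *
    FROM dbo.Sales WITH (NOLOCK)

    -- Cleaned up: explicit columns, a table alias and a terminating semicolon
    SELECT s.SaleID,
           s.SaleDate,
           s.Amount
    FROM dbo.Sales AS s;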

All of this may be easy with a small database and a small volume of code to handle, but what happens when we need to deal with a large volume of code? What if we inherit something full of these anti-patterns and we just don't have the time to go through all of it manually and fix it? But suppose we had an automated utility that could do this for us? Even better, what if we could integrate it into our Azure DevOps pipelines?

ScriptDOM is a lesser-known free library from SQL Server DacFx that can help find programmatic and stylistic errors (a.k.a. linting) in T-SQL code. It can even fix some of these errors!
In this session we will learn what it is, how we can harness its power to read code and tell us what it finds, and how to actually fix some of those anti-patterns.
Join us for this highly interactive and demo-packed session for great insights on how to improve the quality of your code. Basic knowledge of T-SQL and PowerShell is recommended to get the most out of this session.

feedback link: https://sqlb.it/?7110

Fast Forward your Career by Contributing to the Community

Getting involved in the Data Platform community can be beneficial for your career, both short-term and long-term. Not everyone has the desire to jump on a stage and speak in front of an audience, but luckily you don't need to! There are many options available to build your personal brand out there. In this session, we'll go over these options and see how they can have a great impact on your professional life!

T-SQL Best Practices and Tuning for the BI Developer

When building a data warehouse on the Microsoft Data Platform stack, you typically have to write a fair share of T-SQL. Loading data into the data warehouse, transforming it into the desired format, troubleshooting and debugging, writing source queries for reports: all of those tasks require some T-SQL, and preferably fast T-SQL at that.
In this session, we'll go over some T-SQL best practices from the viewpoint of the BI developer. We'll also cover indexing for the data warehouse and how you can make that SSRS report a bit faster. At the end, you'll walk out with solid knowledge of how to improve your T-SQL skills and deliver results faster. Basic knowledge of T-SQL and SQL Server is assumed.
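
One classic example of the kind of rewrite such a session covers (an illustrative sketch; the table and column names are hypothetical): wrapping a column in a function makes a predicate non-SARGable and blocks index seeks, while an equivalent range predicate does not.

    -- Non-SARGable: the function on OrderDate prevents an index seek
    SELECT o.OrderID
    FROM dbo.Orders AS o
    WHERE YEAR(o.OrderDate) = 2023;

    -- SARGable: an equivalent range predicate can seek on an OrderDate index
    SELECT o.OrderID
    FROM dbo.Orders AS o
    WHERE o.OrderDate >= '20230101'
      AND o.OrderDate <  '20240101';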

Practical Data Visualization Best Practices

Data visualization is the next hot topic! Tools like Power BI and Tableau make it easy to create stunning visualizations and dashboards. However, creating data visualizations that effectively communicate data takes a bit more effort and consideration. In this session, we'll cover:

* an introduction to the key concepts of data visualization
* what is visual perception and how can it help us?
* why does everyone love pie charts, and why shouldn't we?
* how can we lie with charts?

The modern Cloud Data Warehouse - Snowflake on Azure

The era of cloud data warehousing has finally begun. Gone are the days when you had to provision expensive servers and spend days configuring and tweaking to get the technical details right. Using cloud infrastructure, we can skip past the technical set-up and start loading data immediately.

Snowflake is a vendor offering a native cloud data warehouse, hosted on Azure. In this session, we'll introduce you to this new technology and explain the important concepts to get you started. Walking out of this session, you'll have all the knowledge you need to embark on a project with Snowflake.

Supercharge your data warehouse with Columnstore Indexes

BI solutions nowadays need to be faster, bigger and more real-time. How can your data warehouse handle enormous amounts of data and still have blazing fast performance? In this session we will cover the technology of columnstore indexes:

* what are they and how can you use them?
* how do you get data in them? And out?
* what kind of performance can you expect?

Join Koen in this session to learn how you can make your data warehouse faster!
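
To give a flavour of the syntax (a minimal sketch; the fact table name is hypothetical), compressing an entire fact table into columnstore format takes a single statement:

    -- Convert a rowstore fact table into a clustered columnstore index,
    -- a typical setup for large data warehouse fact tables
    CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
        ON dbo.FactSales;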

SQL Server Integration Services Best Practices and Tuning Tips

Have you ever inherited a slow-running, unmaintainable mess of an SSIS package? Or do you want to take your SSIS development skills to the next level?

In this session we will guide you through common SSIS best practices and performance pitfalls. We will make packages easier to read, maintain and more efficient. You will learn some practical tips to make your life – and that of your successor – easier.

Topics covered in this session:

* How to quickly lay out your SSIS packages and how to add documentation
* A few hidden tricks of SSIS
* Best practices for SSIS development
* A couple of quick performance boosters

Join us to learn how to be a top-notch SSIS developer.

Self-service BI - Why Data Modelling Is Still Important

Self-service business intelligence is on the rise. Tools like Power BI, Power Query and Power Pivot make it easy for everyone to create a powerful model that can hold millions of rows. However, it's not all cookies and rainbows: to get good performance and correct results, a good data model is required. In this session, we go over the basics of data modelling for business analysis. You will learn to apply a set of guidelines to make your models better, faster and more robust.
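
Such guidelines typically centre on the star schema: descriptive attributes live in dimensions, measurable events in facts. As a hypothetical illustration (all names invented), the shape looks like this in T-SQL:

    -- A minimal star schema: one dimension and one fact table
    CREATE TABLE dbo.DimCustomer (
        CustomerKey  INT IDENTITY(1,1) PRIMARY KEY,
        CustomerName NVARCHAR(100) NOT NULL,
        Country      NVARCHAR(50)  NOT NULL
    );

    CREATE TABLE dbo.FactSales (
        CustomerKey INT NOT NULL
            REFERENCES dbo.DimCustomer (CustomerKey),
        OrderDate   DATE NOT NULL,
        SalesAmount DECIMAL(18, 2) NOT NULL
    );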

Migrating SSIS to the Cloud

Integration Services is a mature ETL product that has been around for more than a decade. A true alternative is still missing in the Azure cloud, but you can migrate your existing projects to Azure. In this session, we'll cover the options you have for your migration project and we'll go into detail on how we can run SSIS packages in Azure Data Factory.

Data Warehouse Automation with metadata-driven development. For free!

Working on your 20th staging package? Do you need to import 30 different flat files? Don't have the time to create all those packages?
With Biml - Business Intelligence Markup Language - you can cut back on development time by:
* generating your SSIS packages and database objects using metadata
* turning tedious, repetitive work into reusable scripts
* automating general ETL patterns

In this session, we will go over the basics of Biml and BimlScript and learn how we can use metadata to create a framework for automating your data integration tasks.
After this session, you can start with the automation of your SSIS packages.
A working knowledge of Integration Services package development is necessary for this session.

SQLBits 2022 Sessionize Event

March 2022 London, United Kingdom

PASS Data Community Summit 2021 Sessionize Event

November 2021

dataMinds Connect 2021 Sessionize Event

October 2021 Mechelen, Belgium

Techorama 2021 Spring Edition Sessionize Event

May 2021 Antwerpen, Belgium

dataMinds Saturday Sessionize Event

March 2021

dataMinds Connect 2020 (Virtual Edition) Sessionize Event

October 2020 Mechelen, Belgium

CloudBrew 2019 - A two-day Microsoft Azure event Sessionize Event

December 2019 Mechelen, Belgium

dataMinds Connect 2019 Sessionize Event

October 2019 Mechelen, Belgium

DataGrillen 2019 Sessionize Event

June 2019 Lingen, Germany

Dataminds Connect 2018

Beyond the basics - Take your Biml skills to the next level

October 2018 Gent, Belgium

SQLSaturday Holland 2018

Introduction to Biml - Generating your SSIS packages from scratch

October 2018 Utrecht, The Netherlands
