Benni De Jagere
No coffee? No insights!
Benni is a Senior Program Manager in the Fabric Customer Advisory Team (Fabric CAT) at Microsoft. He aspires to be top notch in his field through continuous personal development of both technical and soft skills. He strives for maximum results in his tasks using team play, communication, thinking outside of the box, and (endless) motivation. Benni is always ready to tackle the unknown, or pick up fresh ideas to broaden his range. Building on past experiences, he continuously tries to find new, more efficient ways of obtaining results, improving the process along the way.
Loving (almost) every day of it, he’s fascinated by the value of data, sometimes flabbergasted by the lack of awareness, and intrigued by the endless possibilities whilst discovering new ways of looking at data. He thrives on unfolding new insights for customers whilst using an open and transparent communication.
On a daily basis he turns (large amounts of) coffee into insights for customers, and references witty British comedy, lame dad jokes, and obscure facts way too often. Overly enthusiastic about anything data related, he’s trying hard to keep up with all things new and shiny.
When not working, blogging or reading, you’ll likely find him out and about being his weird self. Rumour has it that he’s also involved with a ragtag band of data enthusiasts, enjoying themselves whilst organising cool community things.
They go by the name of .. dataMinds!
Area of Expertise
Perhaps you've seen "Star Schema ALL the things!", "Never use Calculated Columns", or "Bi-Directional relationships suck" before when thinking about design considerations for your data model, but you've never really stopped to think about the specifics behind them and why exactly they could benefit or hurt your model. Who knows, maybe that specific advice doesn't even work out for the scenario at hand and you might not be aware because you've skipped a few steps in the process.
Come along in this journey from source to model to report using a practical mindset, thinking about the design decisions and ramifications along the way. At the core of the session lies the message to think about best practices, with the added step to test, assess, and benchmark what exactly they do for you.
Whether it's deciding where your transformations need to be done, how exactly the data and tables need to be modelled, or what you allow the end users to do with your model, these are all important steps to take, preferably without shortcuts. We'll take the steps on a moderately complex data model, and measure as we move along.
Meaning, at the end of the session we'll have discussed why Star Schemas can help you, and how you can assess for yourselves whether they are beneficial for your use case.
May 23rd 2023 will be remembered by many as a day that shook the Data & AI ecosystem for Microsoft, potentially even beyond. Many Power BI Admins lived their life dealing with the woes and worries of a reporting environment, until that very day a few things were made public. With the announcement of Fabric, an all-in-one Analytics solution covering a whole new range of possibilities, Power BI Admins have a lot more ground to cover. But how will they do it? How will they keep up with all the new options and responsibilities?
This session will focus on the life of a Power BI Admin moving over to the extended Fabric realm, and keeping their things in check. We'll look at the familiar topics of capacity and workspace management, and how they align with the new workloads that have been introduced. We'll notice there's a very large overlap, as Fabric has been built on the foundations of Power BI.
To wrap up, we'll look at some of the new tools that were introduced to help you keep an overview of the tenant, and make sure the right data gets to the right people. This means getting an inventory of our tenant settings, setting up alerts for when something changes, and working with the newly introduced Admin Monitoring workspace for an easy overview of what is happening where.
Walking out of this session, a Power BI Admin should feel more confident about Fabric, and be convinced it can actually make their life easier.
Many companies have invested heavily in building data lakes to store large volumes of structured and unstructured data from various sources into Delta Parquet files. These Delta Parquet files can be used for a wide range of Analytics and Business Intelligence applications. Yet most of these organizations struggle to derive insights from their investments, due to the complexity of accessing and querying the data, and the question of how to let self-service users connect to this data in the lake using Power BI.
With the introduction of Microsoft Fabric, an all-in-one analytics solution for enterprises, we now have a better approach for this. In this session, we will explore how to use Lakehouse data at scale with Power BI, using the new Direct Lake connectivity mode. Power BI Direct Lake combines the best of both worlds from Import and DirectQuery mode, and gives us the option for great performance over data in the lake, without introducing additional latency for dataset refreshes.
We will start by discussing the benefits of the Lakehouse architecture and how it can improve data management and analytics. We will then move on to explore how to connect to Lakehouse data using Power BI by combining both of these architecture components and using each of them to their strengths.
We will also cover best practices for optimizing performance when working with large volumes of data, including data partitioning and query optimization techniques. We will demonstrate how to use Power BI to analyze Lakehouse data in real time and how to build reports that provide actionable insights for decision-making.
By the end of the session, attendees will have a solid understanding of how to leverage Lakehouse data at scale with Power BI and how to build powerful analytics solutions that can handle massive amounts of data. Whether you are a data analyst, data scientist, or BI professional, this session will provide you with valuable insights into the world of Lakehouse data and Power BI, featuring the new Direct Lake connectivity mode.
"Pie charts suck!", "Never use pie charts!", or "Can you replace that hideous pie chart?" are probably some of the sentences you've heard when you've proudly presented your latest report to make the corporate data more digestible, or when you've ventured on the wondrous wide world web.
Pie charts have often been at the center of heated conversations between many data visualisation enthusiasts and/or data artists. But do they really need to be "killed with fire", or do they actually serve some good purpose?
During this short lightning talk, we'll discuss the "aha's", "gotchas", and "oh, right .." factors of using a pie chart in your reports. Aimed at making the point that not all pie charts are bad, but they should be used correctly, this will be a fiery rant you don't want to miss.
When gathering requirements for your new model and reports, you've probably heard it before.. "We want to get real-time results from our source", or "We don't want to store data in Power BI, because it's already in the cloud". Maybe the stakeholders don't want to wait for that model refresh that will take way too long in their opinion. Who knows, plenty of arguments have been raised before, some with more sense than others. Having worked through my fair share of troubleshooting scenarios, I noticed some recurring themes when DirectQuery was at play.
Using a DirectQuery connectivity mode in Power BI allows you to achieve great results, if executed correctly. With less room for error and interpretation, a DirectQuery approach requires the different cogs in the chain to work well together, to ensure a smooth process.
During this session, we'll discuss some of the common patterns that make opting for a DirectQuery approach a valid scenario. Then, we'll cover the best practices around optimizing the data source, data model, and reports, so we can keep our query performance at an optimal level. Next, we'll discuss how advanced modelling techniques like Hybrid Tables, Composite Models, Aggregation Tables, and Auto Aggregations can make this scenario a bit more robust.
Last but not least, we'll also discuss the practical usage details you have to educate your end users on, as a report in DirectQuery will be more prone to reduced performance by end user interactions.
Have you ever asked a question, only to get a vague answer? Tired of hearing 'It Depends'? Or maybe you've even been frustrated that you're not getting the answer you were hoping for.
Asking questions is a very powerful tool, but one that can be bothersome to wield. By supplying the other party with the concise information they need, and by "forcing" them to give a decent answer, there's a lot more information we can receive. By getting the information we need, we'll be able to proceed quicker, and get to results faster.
It can be a tricky process, but with a few basic things to keep in mind, we can continually improve ourselves, for a better understanding.
When submitting a session abstract to a conference or user group, you've probably thought: "Ehhhh, I'll put some more details in that later". But then time goes by, and other things get in the way.
Writing an abstract is hard, especially for a session you'll probably only be presenting a few months down the road. But have you consciously considered the words you're putting in there, and what kind of expectations they set for other parties?
Based on experience as a conference organiser, speaker and attendee, we'll dive into the key segments that should be included in your session abstract, to increase your chances of being selected. After all, if the selection committee understands what you'll be explaining, this can only be a good thing. Right?
Having a balanced abstract will also help attendees make the choice if your session is the right one for them and set the expectations correctly before they attend. And of course, you'll help yourself by clearly outlining the content you'll be discussing up front.
** These are based on personal experiences, and hence subjective. Attendees are free to use their own will and creativity to create their own set of handles.
For many organisations, Power BI is becoming absolutely key in their Information Delivery stream. As an important cog in the chain, it's the responsibility of that organisation to keep the cog well oiled.
Power BI Tenant Admins need to make sure they're on top of their game, to keep all parties involved on their good side. During this talk, we'll go through some of the key activities to optimise this process, based on my war stories as a consultant.
Security, Auditing, Monitoring and Alerting are the cornerstones of this process, which doesn't have to be hard. Walking out, you'll have some practical tips to take home with you.
Your company is thinking about investing in Power BI Premium, or has it already taken the plunge? Quite commonly, I've encountered the belief that all (Power BI related) worries will magically disappear when switching over to Premium Capacity. Out in the wild, the reality is often different. Power BI Premium Capacity requires some attention, but giving it that attention will result in the ability to do great things!
During this talk, I'll touch on some of the key activities your organisation needs to perform to make sure your investment is one that pays off.
Using the Premium Capacity Metrics App as our base of operations, we'll expand into auditing, licensing, and some common dataset performance patterns, to make sure you're up to the task!
So you've built a Power BI report with all the shinies? Good!
After some use, users have complained that it's a bit sluggish, and they're not keen on using it anymore.
Starting off with the new Performance Analyzer Pane, we'll demonstrate how you can pinpoint bottlenecks in your report, and take actions on these insights.
With the myriad of root causes that are out there, an elimination approach might suit us very well.
Some of the main causes will be explained, and we'll get you going on how you can fix it for yourselves.
This talk will place emphasis on leveraging external free community tools to assess the state of the data model, and go further on fixing the groundworks of your solution.
Attendees should note that this is not a DAX Deep Dive class, before wandering in :-)
Microsoft Power BI is all the rage, with enterprise and user adoption soaring year after year. New users get onboarded, and reports, models, dashboards, dataflows, .. get created and tweaked every day. Whoa, things must be going great if we have so many things happening every day! Right?
And yet, as usage skyrockets in our organisation, it's crucial to keep a solid oversight of the artefacts and activities generated by our colleagues. All too often, organisations have procedures, governance models, methodologies, and more in place, without a structural way of making sure they actually work for the business processes. Giving the business usage information for their artefacts will help them make informed decisions. Yet making sure this information gets to Data Stewards (workspace owners) is often left untouched, as there's no ready-to-go solution out there.
This session will focus on creating a Self-Service Power BI model and report, where users can follow up on activities performed on artefacts in their zone of control. We'll start with collecting the necessary information through various APIs, defining a model with Row-Level Security, and layer a Power BI Report on top of this.
Leveraging a solution like this can drastically improve the overview of general Power BI usage in the organisation, and support informed decisions for the BI Information Delivery flow.
Microsoft Fabric introduced the concept of Dataflows (Gen 2) to the general public. Fabric Dataflows enable users to access and leverage data from different environments and sources in a low/no code way within Fabric. Sounds awesome, right? How can we get started?
Using the familiar Power Query authoring experience, we can extract, transform, and load data into different places within Fabric, depending on our use cases. We will explain the basic architectural components to give you a solid foundation, and then we will explore some actual use cases within the solutions you might be building. We will also highlight some potential pitfalls that might affect your experience.
Fabric Dataflows have an audience of business users in mind who want to work with their data in a seamless way, though they also offer some interesting capabilities for more experienced Data Engineers. We will show how business users can play a vital role in helping to build out those elusive Enterprise Data Platforms and Models in a co-creation approach.
Finally, we'll discuss how you can integrate these Dataflows into your overall structure for refreshing the data, to make sure they are processed at the right time, to the right place.
Join us on an exciting journey as we dive into Microsoft Fabric and its new possibilities. In this session, we will explore the new workload options that Microsoft Fabric brings to the table and demonstrate how they can energize your data analytics experience.
During the guided tour, we will navigate through the key features and functionalities of Microsoft Fabric, shedding light on its seamless integration with Power BI. We will showcase how this powerful combination allows users to leverage the full potential of their data to gain valuable insights and make data-driven decisions.
Throughout the session, you will discover:
1. Introduction to Microsoft Fabric: Gain a fundamental understanding of the platform and its core principles, highlighting its ability to handle diverse workloads efficiently.
2. Capacities, and how they work. How exactly are they different from what you’re used to with other Azure Data Services?
3. Workload Options: Explore the newly introduced workload options provided by Microsoft Fabric and understand how they complement and enhance Power BI's capabilities.
4. Integration with Power BI: Witness the seamless integration between Microsoft Fabric and Power BI, enabling users to transition smoothly from data ingestion to data visualization and reporting.
Whether you are a data analyst, business intelligence professional, or an executive looking to unlock the true potential of your data, this session will provide you with valuable insights and practical knowledge about Microsoft Fabric from a Power BI perspective. Join us to discover the next frontier of data analytics and embark on a journey towards data-driven excellence.
You've heard about Microsoft Fabric, and you're ready to take it for a spin? Excellent, let's get you started in those few advertised minutes! But hold on .. you need a capacity to actually use something, and you might not be completely clear on what it actually entails? You're not alone with these questions, and it is perfectly fine to stop and think about it for a while. In fact, it's a good thing you want to understand the single most core concept of Fabric, as that will hopefully allow you to make better decisions down the road.
The introduction of Fabric Capacities sparked a lot of questions with Data Architects, Engineers, and Analysts coming from an IaaS or PaaS (Infrastructure or Platform as a Service) way of working. Microsoft Fabric is presented as an all-in-one Analytics SaaS (Software as a Service) solution, with a unified measure for Compute and Storage, promising to make cost and performance predictability a lot simpler. Great! But what exactly does that mean, and what will it actually cost the company?
To understand Fabric Capacities, we need to briefly look at the architecture and what exactly those unified measures look like, including how they are similar, yet different from the existing Power BI Premium Capacities. Understanding the different types and sizes of capacities will help us make the right decisions for our Data Platform solutions in the organization.
But then, how do you manage those capacities and assess if they are in a healthy state? What are some of the options to follow the demands and needs of your business users to allocate the right resources to them? Most importantly, what options do I have to automate the majority of these tasks?
Walking out of the session, you should understand the key concept of Fabric Capacities and how they are at the core of everything you'll do in Microsoft Fabric, be able to choose the one that is right for you, periodically assess if the choice was right, and act where needed.
You've decided to take the plunge, and create a brand new session you want to share with the world? Maybe you've tried a few times already, and didn't manage to find a place to present? These days more than ever, a lot of people have picked up an interest in public speaking, making it harder to "stand out". Having a stellar abstract alone won't be enough on many occasions, so we have to do the work to skew those odds a little in our favour.
Whether you're just wanting to burst onto the scene as a new presenter, you've had your first couple of gigs under your belt, or you're practically part of the furniture, it's always hard to get selected to present at an event, conference or user group. The key thing to understand is that rejection is a part of it, and it's how you choose to deal with it that will determine your future attempts.
During this session we'll cover how you find those elusive opportunities in the first place, and how you can make the most of them with just a few simple things to keep in mind. Most importantly, what can you do to help organizers or selection committees understand why exactly your session would be a good fit for them, and why you think it would be a good fit for their audience.
This session does not cover the elements of writing session abstracts, or determining the right audience for those sessions. Attendees should understand that these are subjective points of view, and will not guarantee success.
Data Platform Next Step Upcoming
Data Point Prague Upcoming
Data Saturday Stockholm 2024 Upcoming
DataGrillen 2024 Upcoming
SQLDay 2024 Upcoming
SQLBits 2024 - General Sessions Upcoming
Power BI Gebruikersdag 2024 Upcoming
Data Saturday Denmark - 2023
Future Data Driven