Just Blindbæk

Microsoft BI architect, trainer, speaker and MVP | twoday

Århus, Denmark


BI architect with extensive experience in all phases of BI development on Microsoft SQL Server, Azure, Fabric and Power BI. Founder and coordinator of Microsoft Business Intelligence Professionals Denmark (MsBIP.dk) and Power BI UG Denmark (PowerBI.dk). Microsoft Certified Trainer.


Area of Expertise

  • Information & Communications Technology

Topics

  • Microsoft Power BI
  • Microsoft Fabric
  • Azure Synapse Analytics
  • Data Engineering
  • Microsoft Data Platform
  • Azure Data Factory

Mastering Fabric Administration

Join us for a comprehensive workshop designed for Fabric administrators and those who want to understand the administrative capabilities within Microsoft Fabric. Whether you are stepping into the role of a Fabric administrator or want to collaborate more effectively with one, this session offers a deep dive into tools, settings, and best practices.

We’ll start with tenant-level administration, including managing settings, access controls, compliance, delegation, and network security with Private Link and Microsoft Entra ID Conditional Access. From there we’ll dive into capacity administration, focusing on roles, retention policies, and configurations at the workspace level. With actionable insights from tools like the Monitoring Hub, you’ll learn how to monitor workspaces efficiently and ensure smooth operations. Automation is a core theme: we’ll explore how to streamline administrative tasks using APIs, along with monitoring strategies that cover the built-in options and how to develop custom solutions to meet your organization’s needs. You’ll also learn gateway administration, covering architecture and management best practices.
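
As a flavor of the API-driven automation covered, here is a minimal sketch that lists every workspace in the tenant through the Fabric admin REST API; it assumes you already have a Microsoft Entra ID access token with the required admin API permissions (the token value shown is a placeholder).

```python
# Minimal sketch: list every workspace in the tenant through the Fabric admin
# REST API. The token below is a placeholder - acquire a real one via MSAL or
# a service principal with the required admin API permissions.
import requests

FABRIC_API = "https://api.fabric.microsoft.com/v1"
token = "<access-token>"  # placeholder

resp = requests.get(
    f"{FABRIC_API}/admin/workspaces",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

for ws in resp.json().get("workspaces", []):
    print(ws.get("id"), ws.get("name"), ws.get("capacityId"))
```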

Through real-world scenarios and hands-on exercises, you’ll gain practical skills to manage and secure your Fabric environment confidently, finishing with an interactive Q&A to address your questions.

Mastering Spark Notebooks and Capacity Optimization in Microsoft Fabric

Running Spark notebooks in Microsoft Fabric opens up powerful possibilities—but also introduces a compute model that can feel unfamiliar, especially to those coming from a traditional SQL Server or data warehouse background. Every notebook gets its own dedicated compute session, and while this provides strong isolation, it can quickly lead to unexpected capacity consumption and limits if not managed thoughtfully.

This session offers a deep dive into how Spark compute works under the hood in Fabric, with a focus on how to run more efficiently without wasting Capacity Units. We’ll explore the impact of autoscaling Spark pools, how bursting works in practice, and the introduction of the new Autoscale Billing model that charges based on actual vCore usage per second, rather than maximum allocation. You’ll learn how to take control of your workloads through techniques like using small or single-node Spark pools, orchestrating notebooks with runMultiple(), and sharing sessions through High Concurrency Mode—both interactively and within Pipelines.
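
For illustration, a minimal sketch of the runMultiple() pattern mentioned above, assuming it runs inside a Fabric Spark notebook where notebookutils is part of the runtime; the notebook names are placeholders.

```python
# Sketch: run several notebooks in parallel inside the current Spark session,
# so they share compute instead of each starting its own session. Only works
# inside a Fabric notebook, where notebookutils is part of the runtime; the
# notebook names below are placeholders.
import notebookutils

results = notebookutils.notebook.runMultiple(
    ["nb_load_customers", "nb_load_orders", "nb_load_products"]
)
print(results)
```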

Whether you're building data pipelines, running exploratory work, or managing shared capacity across a team, this session will help you understand how Spark in Fabric behaves, how it’s billed, and how to optimize it for both performance and cost.

Fabric Monitoring Made Simple: Built-In Tools and Custom Solutions

As organizations increasingly rely on Microsoft Fabric for their data needs, effective monitoring becomes essential—not just for performance optimization, but also for ensuring security, tracking adoption, and maintaining compliance.

In this session, we’ll explore the full spectrum of monitoring options in Fabric. You’ll learn how to leverage Microsoft’s built-in tools—such as the Monitoring Hub, Admin Monitoring workspace, and Workspace Monitoring—to gain valuable insights and improve operational efficiency.

We’ll also introduce the Fabric Unified Admin Monitoring (FUAM) solution—a powerful, open-source framework developed by the community and supported by Microsoft. FUAM bridges the gap between built-in tools and fully custom monitoring by offering a scalable, extendable approach to tenant-wide visibility.

Finally, we’ll demonstrate how to take things further by building your own monitoring pipelines using Fabric Data Factory and Fabric Spark, tailored to meet specific organizational requirements.
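
As an illustration of such a custom pipeline, here is a hedged sketch that pulls one day of tenant activity events and appends them to a Lakehouse table from a Fabric Spark notebook; the activity-events endpoint, its parameters, and the table name are assumptions to verify against the current Fabric REST documentation.

```python
# Hedged sketch of a custom monitoring step in a Fabric Spark notebook: pull
# one day of tenant activity events and append them to a Lakehouse Delta
# table. Verify the exact endpoint and parameters against the current Fabric
# REST documentation; the table name and token are placeholders.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
token = "<access-token>"  # placeholder

url = (
    "https://api.fabric.microsoft.com/v1/admin/activityevents"
    "?startDateTime='2025-01-01T00:00:00Z'&endDateTime='2025-01-01T23:59:59Z'"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=60)
resp.raise_for_status()
events = resp.json().get("activityEventEntities", [])

if events:
    df = spark.createDataFrame(events)  # schema inferred from the JSON payload
    df.write.mode("append").saveAsTable("monitoring.activity_events")
```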

By the end of this session, you’ll have a solid understanding of what’s available out of the box, how FUAM can accelerate your admin insights, and how to develop custom solutions when you need maximum flexibility. Whether your goal is to save time, reduce costs, or strengthen your data governance, this session will equip you with the knowledge to succeed.

Optimizing Fabric: A Deep Dive into Workspace Management

Fabric provides a comprehensive analytical solution, but it can quickly become overwhelming without establishing clear guidelines for workspace creation and utilization. Given the ever-expanding range of resources that Fabric offers, effective organization is paramount. This not only enhances the overall user experience but also plays a pivotal role in ensuring security and permissions are appropriately managed.

In this session, we will delve into best practices for governing workspaces within your tenant. These guidelines are designed to cater to both enterprise-level users and those engaging in self-service operations. You'll receive a practical checklist tailored to various use cases, allowing you to kickstart your workspace management or streamline existing setups with ease.
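
As a taste of how such guidelines can be checked in practice, a small sketch that inventories workspaces through the Fabric REST API and flags names that break a naming convention; the endpoint, token handling, and the naming pattern are illustrative assumptions.

```python
# Illustrative sketch: inventory workspaces via the Fabric REST API and flag
# names that break a naming convention. Endpoint, token handling, and the
# naming pattern are assumptions for the sake of the example.
import re
import requests

token = "<access-token>"  # placeholder - acquire via MSAL / service principal

resp = requests.get(
    "https://api.fabric.microsoft.com/v1/workspaces",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

# Example convention: "<Area> - <Solution> [DEV|TEST|PROD]"
pattern = re.compile(r"^.+ - .+ \[(DEV|TEST|PROD)\]$")
for ws in resp.json().get("value", []):
    name = ws.get("displayName", "")
    if not pattern.match(name):
        print(f"Review naming: {name} ({ws.get('id')})")
```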

From Raw to Refined: Building a Metadata-Driven Lakehouse in Microsoft Fabric

Ever wondered what it really takes to go from messy raw data to polished insights in Microsoft Fabric?

In this demo-heavy session, we’ll show you exactly that—live. Together, we’ll build a metadata-driven Lakehouse using an open-source accelerator designed to help data professionals deliver high-quality, reusable solutions faster.

You’ll see how a layered medallion architecture (Landing, Base, Curated) simplifies data transformation. With PySpark and Spark SQL, we’ll automate data cleaning, validation, and loading of facts and dimensions—turning what’s often a complex process into a clean and structured workflow.
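
To give a feel for the pattern, here is a minimal, hypothetical sketch of a metadata-driven load from the Landing to the Base layer; the metadata table, its columns, and the target schema are illustrative and not the accelerator's actual implementation.

```python
# Hypothetical sketch of a metadata-driven load from Landing to Base: read a
# metadata table describing the entities, then clean and persist each one as
# a Delta table. Table and column names are illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

meta = spark.sql("SELECT entity_name, source_path FROM base.load_metadata").collect()

for row in meta:
    df = spark.read.format("parquet").load(row["source_path"])
    (df.dropDuplicates()                  # basic cleaning step
       .write.mode("overwrite")
       .format("delta")
       .saveAsTable(f"base.{row['entity_name']}"))
```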

This session is for Fabric users who want to:
- Accelerate their Lakehouse development with ready-to-use tools
- Understand how metadata can drive automation and consistency
- Learn practical techniques for shaping data into star schemas using Spark

You’ll leave with real code, working patterns, and a clear path to building better data pipelines in Fabric.

Demystifying the Data Lakehouse in Fabric

Are you a Classic BI and Data Warehouse (DWH) developer eager to understand the Data Lakehouse concept that's taking the data world by storm? This session is tailored just for you.

Delve into key questions: What exactly is a Data Lakehouse, and why are terms like bronze, silver, and gold gaining popularity? Should you embrace Python or can you rely on trusty SQL?

Discover the power of decoupling storage and compute, where storage is cheap, compute is expensive, and you can easily scale compute for specific tasks. Learn about OneLake and Delta Lake and why mastering PySpark makes sense for tasks like data cleaning, handling semi-structured data, and integrating with API-based sources like event streams.

But don't forget, SQL is still cool. It's excellent for defining business logic and simplifies porting logic between platforms, and with Spark SQL it's not far from what you already know. Plus, explore how to leverage Spark notebooks with markup for documentation.
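
As a small taste, a sketch of business logic expressed as Spark SQL from a notebook, assuming a Fabric Lakehouse where tables default to Delta; the table and column names are purely illustrative.

```python
# Sketch: business logic expressed as Spark SQL from a notebook - close to the
# T-SQL a DWH developer already knows. Assumes a Fabric Lakehouse where tables
# default to Delta; the table and column names are purely illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE OR REPLACE TABLE curated.dim_customer AS
    SELECT
        c.customer_id,
        UPPER(TRIM(c.customer_name))   AS customer_name,
        COALESCE(c.segment, 'Unknown') AS segment
    FROM base.customers AS c
    WHERE c.is_deleted = 0
""")
```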

Join us to unravel the Data Lakehouse in Fabric and learn how to make the right choices for your data development journey.

Mastering Cross Semantic Model KPI Reporting in Power BI

Organizations often build multiple semantic models - Finance, Sales, Operations - each maintained separately. But when leadership asks for a “single source of truth,” how do you deliver consistent KPI reporting across them all?

In this session, we’ll explore the main approaches to cross-model KPI reporting and the trade-offs of each:

Dashboards and Scorecards – easy to set up and great for quick wins, but limited in flexibility, governance, and depth.

Mega model – a single, consolidated model that attempts to hold everything. While tempting, it often fails due to complexity, governance challenges, and ongoing maintenance overhead.

Unified semantic model – a middle ground that brings together only what’s needed from existing domain models. We’ll explore when this pattern makes sense, when it doesn’t, and the design principles that make it work: reusing logic, minimizing duplication, and maintaining independence of source models.

You’ll leave with a clear decision framework, a practical understanding of the available options, and inspiration for applying a ready-to-use pattern that enables consistent KPI reporting across multiple semantic models.

The Hitchhiker’s Guide to Navigating Power BI

Do your users ever get lost in Power BI? Between reports, apps, dashboards, and the Power BI Service itself, navigation can quickly become confusing. A poor navigation experience doesn’t just frustrate users - it hurts adoption and reduces trust in your BI solution.

In this session, we’ll take a guided tour through the many ways you can improve navigation in Power BI:
• Inside reports: bookmarks, buttons, drill-through, page design, and layouts that guide users naturally
• Across apps: building intuitive app navigation, structuring content, and using dashboards where they still make sense
• In the Service: optimizing the Power BI portal, integrating with Microsoft Teams, and making apps easier to discover

You’ll see practical examples and design patterns you can apply immediately to help users feel less like hitchhikers in the Power BI universe - and more like confident pilots.

Next Step Power BI Semantic Model Development

Power BI Desktop is a fantastic tool for getting started with data modeling - but if you want to take the next step as a Power BI professional, you need to move beyond its limitations.

This full-day training will teach you how to use Tabular Editor 2 (free and open-source) as your primary development environment for Power BI semantic models. You’ll learn how separating metadata from data opens the door to advanced modeling patterns, productivity boosts, source control, and automation.

Through a series of demos and hands-on exercises, we’ll build a semantic model step by step, while introducing advanced concepts and best practices along the way.

Topics covered include:
• Why Tabular Editor? Benefits of metadata-driven development
• Productivity features: multi-select editing, scripting, Best Practice Analyzer
• Advanced modeling patterns: role-playing dimensions, many-to-many, calculation groups
• Perspectives and partitions for user-friendly and scalable models
• VertiPaq storage engine basics and optimization techniques
• Object-level security and governance considerations
• Deployment workflows and integration with source control/DevOps

By the end of the day, you’ll know:
• How to use Tabular Editor 2 for everyday development
• How to design higher-quality, more maintainable models
• How to integrate your modeling work into professional DevOps processes

This training is the perfect bridge for anyone who knows Power BI Desktop and wants to take the next step towards professional-grade semantic model development.

Take your semantic modeling skills beyond Power BI Desktop and learn how to build better, faster, and more maintainable models using the free and open-source Tabular Editor.

Architectural blueprints for the Modern Data Warehouse

This session will walk you through five different ways to set up an affordable Azure-based Data Warehouse solution, covering the pros and cons of each architecture.

Classic solutions have source systems at one end and reporting at the other. But what do we put in the middle? What services are available to extract, transform and load our data? And how about orchestration and monitoring?

Does it matter if we choose schema on read or schema on write? What are the drawbacks and benefits?

We'll put special focus on reusing the competencies we have already acquired from building the same solutions on SQL Server.

Paginated Reports in Power BI

Paginated Reports have moved in with Power BI in the cloud - get ready for the big wedding! Analytical, interactive reports now sit side-by-side with the good old pixel-perfect, print-friendly paginated reports.

Join this introductory session to learn about the strengths and weaknesses of Paginated Reports, and why you should pay attention to this “new” report type in the Power BI Service.

We will walk through the underlying concepts and create a report from scratch all the way to publishing, also covering how you can test and use the feature - even without Premium. Finally, you will get a glimpse of what additional features we can expect, both in the near future and in the longer run.

XMLA endpoints, Power BI open-platform connectivity

Did you hear it? Power BI is merging with Azure Analysis Services! Microsoft is now opening up and giving us full access to the powerful Analysis Services engine in the Power BI service. This gives us semantic models that serve as the single source of truth for the whole enterprise. Manage the models (datasets) with the tools you know and love and give read access to a variety of Microsoft and third-party client tools.

This session will give you the complete overview: how to get started, demos of client tools that work today, and an introduction to what is coming next.

The traditional modern Data Warehouse

Big data, Databricks and now also Synapse Analytics. Microsoft really focuses on how to put together BI and DWH solutions that can handle huge volumes of data. But what about the "ordinary" solutions in the SME market? How do we put together a sensible and affordable Azure solution for them - one where we can reuse the competencies we have already built making the same solutions on SQL Server?

This session will show you how to set up a very simple ETL framework based on Data Factory, Data Lake, SQL DB and Power BI. All code will be made available for you to download and use!

Power BI: From Self-Service to Enterprise

Power BI started out as a set of self-service BI tools in Excel before they were merged into Power BI Desktop a couple of years ago. Since then, Power BI has evolved into a grown-up, scalable enterprise BI platform. But how do you grow a solution from self-service into something that is scalable, managed and governed - a solution that can be trusted and used across the whole enterprise? Essentially, promoting a quickly made proof-of-concept project without redoing the whole thing.

In this demo-heavy session we will look at the steps you have to master to make a successful ownership transfer of the different components in your Power BI solution - be it the data mashup, data modelling, report creation or report distribution. We will start with one Power BI Desktop file containing it all and end with a solution split into Dataflows, a Tabular model, Reports and Apps, taking advantage of the fact that Power BI originally grew out of a set of different tools and technologies. We will end by looking at different ways to certify and brand the datasets, reports and apps, so your users can distinguish what is still self-service and what is enterprise.

Power BI Quiz

The Power BI Quiz show is your chance to prove and test your knowledge of Power BI and maybe win the grand prize! Or be the lucky winner of the raffle prize(s).

It’s fun and you may actually get some learning out of it 😁

From Import to Direct Lake: Choosing the Right Storage Mode in Power BI

Storage mode decisions in Power BI can make or break a project. Import delivers blazing-fast performance but can be limited by memory. DirectQuery keeps data fresh but can slow things down. Composite and hybrid models offer flexibility but add complexity. And now, Direct Lake is changing the game.

In this session, we’ll compare each storage mode in detail, explain common pitfalls, and provide a clear decision framework for choosing the right one in different scenarios.

By the end, you’ll understand how to balance performance, freshness, and complexity to get the best results for your organization.

Advanced Semantic Model Development with Tabular Editor

Power BI Desktop is excellent for learning the basics of semantic model development. But if you want to truly master semantic modeling and automation, you’ll need to go beyond the surface. That’s where Tabular Editor comes in.

In this full-day workshop, we’ll take a deep dive into the Tabular Object Model (TOM) - the metadata structure behind every Analysis Services Tabular Model and every Power BI/Fabric semantic model. With Tabular Editor (free or paid), you’ll gain unrestricted access to TOM and learn how to manipulate it programmatically using C# scripts, PowerShell, and the CLI.

You’ll uncover objects and properties you may not even know exist, explore automation scenarios, and learn how to integrate semantic model development into professional DevOps workflows.

Topics covered include:
• Introduction to AMO and TOM
• Navigating and exploring TOM with Tabular Editor
• Programmatic access with C# scripts and PowerShell
• Understanding and applying TMSL (Tabular Model Scripting Language)
• Advanced TOM objects and properties
• Using the Best Practice Analyzer and building custom rules
• Automating workflows with the Tabular Editor CLI
• Practical use cases and advanced management scenarios

Learning objectives:
By the end of this workshop, you will:
• Have a solid understanding of TOM and its advanced objects/properties
• Know how to work with TOM using Tabular Editor, C#, PowerShell, and CLI
• Be equipped with productivity tips and tricks for advanced modeling
• Understand automation scenarios for DevOps and deployment

Prerequisites:
Attendees should already be familiar with basic semantic model concepts (tables, columns, measures, relationships) and have basic experience navigating Tabular Editor (free or paid). Please install Tabular Editor on your machine ahead of the workshop for hands-on exercises.

Delivered with great success at Power BI Next Step 2024.

Deep Dive: Unified Semantic Models & KPI Reporting

It’s familiar: you’ve got distinct models for Finance, Sales, Operations, maybe Marketing. They’re well designed, maintained independently. And then leadership asks: “Just show me all the KPIs in one place.”

When you try to “force” everything into one model or stitch with dashboards, you run into challenges: duplicated logic, model bloat, refresh dependencies, permissions complexity, and performance bottlenecks. But there is a middle ground - a pattern that brings together only what’s needed from each existing model into a unified view.

In this 90-minute deep dive, we’ll combine strategy with hands-on execution:

Part I – Strategy & trade-offs

• Why a “mega model” often fails (model complexity, governance, maintenance)
• Why dashboards and scorecards are easy but limited
• The “unified view” pattern: when it makes sense, when it doesn’t
• Key design principles: reuse logic, minimize duplication, maintain independence of source models

Part II – Implementation patterns (demo + recipe)

Lightweight version - built entirely in Power BI Desktop:
 • Import queries or tables
 • Bring in aggregated data from source models
 • Reuse existing measure logic where possible
 • Keep your unified model slim and manageable

Metadata-driven version - scalable, maintainable, leveraging Fabric notebooks:
 • Automating extraction of queries/results from source models
 • Generating unified tables and measures programmatically
 • Handling refresh sequencing and dependencies
 • Applying governance and versioning

You’ll see working demos and come away with a pattern you can adopt immediately. You’ll also gain the insight to decide when to build a unified model - and when dashboards or scorecards are the better option.
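
For a rough idea of the metadata-driven version, here is a hedged sketch using the semantic-link (sempy) library available in Fabric notebooks to inventory measures in the source models and evaluate a KPI; the dataset, measure, and column names are placeholders for your own source models.

```python
# Hedged sketch of the metadata-driven version using the semantic-link (sempy)
# library available in Fabric notebooks. Dataset, measure, and column names
# are placeholders for your own source models.
import sempy.fabric as fabric

source_models = ["Finance", "Sales", "Operations"]

# Inventory the measures defined in each source model.
for model in source_models:
    measures = fabric.list_measures(dataset=model)
    print(model, measures.head())

# Evaluate one KPI from a source model, grouped by a shared dimension, ready
# to be landed in the unified model's KPI table.
kpi = fabric.evaluate_measure(
    dataset="Sales",
    measure="Total Revenue",
    groupby_columns=["Date[Year]"],
)
print(kpi)
```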

Experts Live Denmark 2026 Sessionize Event Upcoming

February 2026 Copenhagen, Denmark

Fabric February 2026 Sessionize Event Upcoming

February 2026 Oslo, Norway

dataMinds Connect 2025 Sessionize Event

October 2025 Mechelen, Belgium

Data Saturday & Fabric Friday Holland 2025 Sessionize Event

October 2025 Utrecht, The Netherlands

European Microsoft Fabric Community Conference 2025 Sessionize Event

September 2025 Vienna, Austria

Data Saturday Oslo 2025 Sessionize Event

August 2025 Oslo, Norway

SQLBits 2025 - General Sessions Sessionize Event

June 2025 London, United Kingdom

SQLDay 2025 Sessionize Event

May 2025 Wrocław, Poland

Microsoft Fabric Community Conference Sessionize Event

March 2025 Las Vegas, Nevada, United States

Data Saturday #49 - Denmark - 2025 Sessionize Event

February 2025 Kongens Lyngby, Denmark

Fabric February 2025 Sessionize Event

February 2025 Oslo, Norway

Data Saturday & Fabric Friday Holland 2024 Sessionize Event

October 2024 Utrecht, The Netherlands

European Microsoft Fabric Community Conference Sessionize Event

September 2024 Stockholm, Sweden

SQLBits 2024 - General Sessions Sessionize Event

March 2024 Farnborough, United Kingdom

Fabric February 2024 Sessionize Event

February 2024 Oslo, Norway

Data Community Austria Day 2024 Sessionize Event

January 2024 Vienna, Austria

dataMinds Connect 2023 Sessionize Event

October 2023 Mechelen, Belgium

Data Saturday Denmark - 2023 Sessionize Event

March 2023 Kongens Lyngby, Denmark

SQLBits 2023 - Full day training sessions Sessionize Event

March 2023 Newport, United Kingdom

PASS Data Community Summit

Power BI: From Self-Service to Enterprise

November 2022 Seattle, Washington, United States

dataMinds Connect 2022 Sessionize Event

October 2022 Mechelen, Belgium

DataSaturday Croatia 2022 Sessionize Event

June 2022 Zagreb, Croatia

SQLDay 2022 Sessionize Event

May 2022 Wrocław, Poland

Power BI Community Tour (3 days)

Road trip with focus on new Power BI users

April 2022 Copenhagen, Denmark

Power BI Gebruikersdag 2022 Sessionize Event

March 2022 Utrecht, The Netherlands

SQLBits 2022 Sessionize Event

March 2022 London, United Kingdom

Power BI Fest Sessionize Event

November 2021

Data Céilí 2021 Sessionize Event

May 2021

#DataWeekender v3.1 Sessionize Event

May 2021

Power BI Summit Sessionize Event

April 2021

#DataWeekender #TheSQL Sessionize Event

October 2020

SQLBits

Power BI: From Self-Service to Enterprise

September 2020

Sydney Power BI User Group

The Hitchhiker's Guide to navigating Power BI

September 2020

Power BI Gebruikersdag

The Hitchhiker’s Guide to navigating Power BI

September 2020

SQLSaturday 917

Enterprise and Self-service in Power BI - better together

January 2020 Vienna, Austria

SQLSaturday 910

Customer story: End-to-end Microsoft BI solution in Azure

December 2019 Ljubljana, Slovenia

Power Platform World Tour Brussels

Paginated Reports in Power BI

December 2019 Brussels, Belgium

dataMinds Connect 2019 Sessionize Event

October 2019 Mechelen, Belgium

Data Saturday Holland Sessionize Event

October 2019 Utrecht, The Netherlands

Power Platform World Tour Copenhagen

Paginated Reports in Power BI

September 2019 Copenhagen, Denmark

Power BI Balooza

Enterprise and Self-service in Power BI - better together

June 2019 Atlanta, Georgia, United States

Data in Devon 2019 Sessionize Event

April 2019 Exeter, United Kingdom

Power Platform Summit Europe

Enterprise and Self-service in Power BI - Better Together

March 2019 Amsterdam, The Netherlands

SQLSaturday #816 Iceland

Deploying Power BI in the Enterprise

March 2019 Reykjavík, Iceland

SQLSaturday #765 Denmark

End-to-end Business Intelligence solution in Azure

October 2018 Copenhagen, Denmark

Power BI World Tour Copenhagen

Power BI Embedded

September 2018 Copenhagen, Denmark

Intelligent Cloud Conference 2018 Sessionize Event

May 2018 Copenhagen, Denmark
