Optimizing Semantic Model Size and Refresh in Fabric Capacities

Fabric capacities are powerful, but memory is always the real bottleneck. Whether it’s a model that grows too large or a refresh process that consumes too much memory at once, understanding how to optimize for memory is key to getting the most out of your capacity.

In this deep dive session, we’ll go hands-on with practical demos that show you how to:

1. Optimize semantic model size
- Analyze memory consumption with VertiPaq Analyzer
- Apply automatic and custom aggregations
- Understand SKU memory limits and how to design within them
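The "design within SKU memory limits" point boils down to simple arithmetic: compare your model's compressed size (plus refresh headroom) against the per-model memory limit of your SKU. A minimal sketch of that check, using indicative per-model limits for a few F-SKUs (the exact figures should be verified against Microsoft's current capacity documentation; the helper and its headroom factor are assumptions for illustration):

```python
# Indicative per-model memory limits (GB) for some Fabric F-SKUs.
# Verify current values in Microsoft's capacity documentation.
SKU_MODEL_MEMORY_GB = {
    "F32": 10,
    "F64": 25,
    "F128": 50,
    "F256": 100,
}

def fits_in_sku(model_size_gb: float, sku: str, headroom: float = 0.5) -> bool:
    """Check whether a model, plus refresh headroom, fits a SKU's per-model limit.

    A full refresh can roughly double the memory footprint, since the old and
    new copies of the data briefly coexist; `headroom` models that pressure.
    """
    limit = SKU_MODEL_MEMORY_GB[sku]
    return model_size_gb * (1 + headroom) <= limit

print(fits_in_sku(12.0, "F64"))  # 12 GB * 1.5 = 18 GB, within 25 GB
print(fits_in_sku(20.0, "F64"))  # 20 GB * 1.5 = 30 GB, over 25 GB
```

The same arithmetic works in reverse: given a SKU, it tells you the largest model size you can refresh comfortably, which is exactly where the size-reduction techniques above come in.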

2. Optimize memory during refresh
- Implement incremental refresh patterns effectively
- Use hybrid tables to balance freshness and performance
- Take advantage of Direct Lake mode where it makes sense
- Apply custom partitioning for more control
- Leverage refresh scale-out, a hidden gem that uses read replicas to reduce refresh pressure on your main capacity
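Several of the techniques above (incremental refresh, custom partitioning) come together in the Power BI enhanced refresh REST API, which lets you refresh individual partitions instead of the whole model to keep peak refresh memory low. A sketch of building such a request body (the table and partition names are hypothetical; the body is what you would POST to the dataset's `/refreshes` endpoint):

```python
# Build a partition-scoped request body for the Power BI enhanced refresh
# REST API (POST .../datasets/{datasetId}/refreshes). Refreshing only the
# hot partitions is one way to keep peak refresh memory low.
def partition_refresh_body(table: str, partitions: list[str]) -> dict:
    return {
        "type": "full",
        "commitMode": "transactional",
        "applyRefreshPolicy": False,  # take manual control over partitions
        "objects": [{"table": table, "partition": p} for p in partitions],
    }

# Hypothetical table/partition names for illustration.
body = partition_refresh_body("Sales", ["Sales-2024Q4", "Sales-2025Q1"])
```

Setting `applyRefreshPolicy` to false is the lever for custom partitioning: the service refreshes exactly the partitions you list rather than the ones the incremental refresh policy would pick.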

This session is packed with practical examples, live demos, and real-world tips. You’ll leave with a toolkit of techniques to help you fit more into your capacity, run refreshes more efficiently, and deliver better-performing models.

Just Blindbæk

Microsoft BI architect, trainer, speaker and MVP

Århus, Denmark
