Speaker

Luke Moloney


Fabric @ Microsoft

Dublin, Ireland


Luke works in the Microsoft Fabric Strategic Accounts and ISV team, based out of Dublin, working with customers and ISVs building platforms with Fabric. Before Microsoft, he worked for Aginic, an analytics consultancy in Australia, implementing Azure-oriented data solutions. Outside of technology you can find him watching F1 or rugby.

Area of Expertise

  • Information & Communications Technology

Topics

  • SQL
  • Microsoft SQL Server
  • Azure Synapse Analytics
  • SQL Server Analysis Services
  • Azure SQL Database
  • Azure Data Factory

Using Fabric Real-Time Intelligence for Spark Logs

In this session I will discuss how to use Fabric's Real-Time Intelligence capabilities to ingest, analyse and act on logs from Fabric Data Engineering / Spark.

This session will:
1 - step through the set-up process
2 - walk through the different logging options
3 - show ways to analyse the logs with KQL (see the sketch after this list)
4 - show how to extend the logs to enable actions and alerts.
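As a taste of the KQL analysis step, here is a minimal sketch of querying Spark log data from Python with the azure-kusto-data SDK, assuming the logs have already landed in an Eventhouse. The cluster URI, database, table and column names are placeholders rather than the configuration used in the session.

```python
# Minimal sketch: query Spark logs already routed into an Eventhouse (KQL database).
# All names in angle brackets, plus the table/column names, are placeholders.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://<your-eventhouse-query-uri>"          # placeholder Eventhouse query URI
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)
client = KustoClient(kcsb)

# Hypothetical table and columns for emitted Spark driver/executor logs.
kql = """
SparkLogs
| where Level == "ERROR"
| summarize Errors = count() by ApplicationName, bin(Timestamp, 1h)
| order by Errors desc
"""

results = client.execute("<your-kql-database>", kql)
for row in results.primary_results[0]:
    print(row["ApplicationName"], row["Errors"])
```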

This session will be primarily demo-centric and requires only minimal knowledge of Spark/Data Engineering and/or Real-Time Intelligence.

Microsoft Fabric: Soaring with Data Engineering in the Clouds

Microsoft Fabric is an end-to-end, unified analytics platform that brings together all the data and analytics tools that organizations need. Prepare for a turbulence-free journey with Microsoft Fabric, which is designed to elevate your data engineering and analytical projects.

But what does it mean for Data Engineers?
Join us in this captivating demo-focused session tailored for data engineers, analytics engineers, and data warehousing professionals who aspire to reach new altitudes. In this session we will explore key capabilities such as: OneLake's ability to virtualize your existing data lake investments; a powerful yet simple Spark engine that lets you take to the skies almost instantly; how Fabric's adoption of Delta lets you keep using your favorite tools; how the unified SQL endpoint combines the best of Spark and relational warehousing; and how Fabric's simplicity doesn't mean you lose the ability to optimize when you need to.

If you're currently navigating the data skies with Azure Synapse Analytics, Azure Databricks, Azure Data Factory, or similar technologies, this is a session that will take your career to new heights. Don't miss your chance to explore the boundless opportunities that Microsoft Fabric has to offer, and remember, with us, there's no "data-turbulence" in sight!

Flying High with Data Engineering in Microsoft Fabric

In this demo-centric session we will walk through Data Engineering in Microsoft Fabric, including:
- getting started with Notebooks (see the sketch after this list)
- how the Fabric Lakehouse gives you the flexibility to use the tools you prefer
- the flexibility of data engineering with Spark, including cluster configuration, Environments and library management
- how Copilot enables all developers to be more efficient with Spark.
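As an illustration of the getting-started step, here is a minimal sketch for a Fabric notebook with a default Lakehouse attached. The file path and table name are illustrative, and `spark` refers to the SparkSession that Fabric notebooks provide automatically.

```python
# Minimal "getting started" sketch for a Fabric notebook with a default Lakehouse.
# The file path and table name are illustrative placeholders.

# Read a raw CSV file uploaded to the Lakehouse Files area.
raw = (
    spark.read
    .option("header", "true")
    .csv("Files/raw/sales.csv")          # hypothetical path under the Lakehouse
)

# Apply a simple transformation and save as a managed Delta table, which then
# appears under the Lakehouse Tables area (and its SQL endpoint).
(
    raw.dropDuplicates()
       .write
       .mode("overwrite")
       .format("delta")
       .saveAsTable("sales_clean")       # hypothetical table name
)
```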

Fabric Spark at scale - tips, tricks and best practices

In this demo-centric session we will run through the tips, tricks and best practices when using Spark at scale.

In this session we will cover:
- Different ways to configure and manage your Spark Environments, including cluster sizing, libraries and configuration properties
- Tips for performance profiling and optimisation
- Different options for complex orchestration patterns that minimize cluster start-up time
- Using MSSparkUtils to the fullest to orchestrate end-to-end scenarios (see the sketch after this list)
- Ways to use an Eventhouse to monitor your Spark jobs and find performance regressions.
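As a flavour of the MSSparkUtils orchestration point, here is a minimal sketch of running parameterised child notebooks from a parent Fabric notebook. The child notebook name, parameter and table list are hypothetical.

```python
# Minimal orchestration sketch inside a Fabric notebook, where mssparkutils is
# available by default. Child notebooks run on the parent's already-running
# Spark session, so each step avoids a fresh cluster start-up.

source_tables = ["sales", "customers", "products"]   # hypothetical source list

for table in source_tables:
    # mssparkutils.notebook.run(notebook_name, timeout_in_seconds, parameters)
    exit_value = mssparkutils.notebook.run(
        "nb_load_to_bronze",                          # hypothetical child notebook
        1800,
        {"table_name": table},
    )
    print(f"{table} finished with exit value: {exit_value}")
```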

This session is targeted at people who are either:
1 - looking to start a Fabric project in the near term with extensive use of Spark and Lakehouses; or
2 - currently using Fabric Spark and wanting to take their skills to the next level.

Fabric Events - orchestrate and monitor with ease

The Real-Time Hub in Fabric provides access to a rich set of events from Azure sources, on-premises sources and even Fabric itself.

In this session we will discuss the Fabric events capability which enables greater visibility into what's happening in Fabric, and the ability to respond to those events.

In this session we will specifically discuss and demonstrate:
- how to create a simple audit record of workspace events, OneLake file changes and job events
- how to connect these events to all the real-time intelligence capabilities
- how to use these events to alert on failures and resolve them when they occur
- architectural considerations when configuring Fabric events to ensure resiliency
- options for alerting within Fabric and to external tools
- ways to automate the configuration of monitoring and alerting

This session will include demonstrations stepping through the above configuration. A basic understanding of Fabric is beneficial.

Fabric for ISVs - Develop multi-tenancy apps on Fabric

Microsoft Fabric provides a unified data platform for customers to build solutions on - and this extends to ISVs as well. In this session we will discuss some of the patterns, practices and considerations for ISVs building apps on Fabric.

These patterns, practices, and considerations are inspired by our collaboration with top ISVs across industries who are developing applications on Fabric.

As part of this we will look into two common use cases:
1) Modernizing SQL reporting solutions to Fabric
2) Lakehouse and Spark-centric data access solutions

In this session we will cover:
1 - options for customer isolation and tenancy considerations
2 - managing network access, authentication and authorization in common ISV solutions
3 - automation options for ISVs to enable scalable, repeatable and reliable deployments.
4 - logging and auditing concerns in an ISV scenario
5 - integrating apps built on Fabric with the Fabric Workload Extensibility SDK
6 - different options for enabling customer access to data from a range of applications

Fabric Data Engineering - Lessons from implementations

This session will focus on Data Engineering with Microsoft Fabric, drawing on lessons from successful implementations.

In particular we will focus on:
- performance optimisations within different zones of a solution (bronze, silver and gold).
- how Delta table features can be used to simplify processing (see the sketch after this list)
- different orchestration options - Airflow and Pipelines - and when to choose which
- how to leverage real-time intelligence and data engineering for simpler orchestration.
- different options to minimize cluster start-up time, especially with custom environments
- workspace and capacity design considerations
- deployment approaches and tricks for notebooks and environments
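As one concrete example of the Delta table features mentioned above, here is a minimal sketch of an incremental upsert with MERGE. The table names and join key are hypothetical, and `spark` is the session a Fabric notebook provides.

```python
# Minimal sketch: upsert incremental changes from a bronze table into a silver
# table using the Delta Lake MERGE API. Names are hypothetical.
from delta.tables import DeltaTable

# New/changed rows landed in a bronze table by the ingestion step.
changes = spark.read.table("bronze_customers")

silver = DeltaTable.forName(spark, "silver_customers")

(
    silver.alias("t")
    .merge(changes.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```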

Throughout this session there will be a number of demos to illustrate the above points.

This session assumes a reasonable degree of familiarity with Fabric and some familiarity with Spark.

Diving into the depths of OneLake!

By now you’ve probably heard lots about Microsoft Fabric, the latest and greatest SaaSified analytics solution from Microsoft. In a nutshell, Fabric enables you to work on a single copy of data using your favourite analytical engine. Whether you want to write good old SQL queries, do advanced data engineering using Spark notebooks or you just want to create a report in Power BI, Fabric has you covered. This is all possible through OneLake, the next chapter in the data lake storyline.

I hear you thinking “how does that work/perform?”, “does it integrate with other technologies?”, “what are some of the gotchas?” ... In this advanced session we’ll answer all these questions and instantly transform you into a OneLake expert! We will dive deep into the technology it’s built on, giving you a technical breakdown of the key features whilst also addressing some of the things you need to be aware of before plunging in!


Specifically, we will cover:

- Brief overview of the evolution of data lakes 
- Distinctive features that set OneLake apart, focusing on:
  - file format
  - shortcuts
  - DirectLake
- Hands-on demo illustrating its usability and performance
- Current limitations and unsolved feature mysteries
- How to start working with OneLake (see the sketch after this list)
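To make the "start working with OneLake" point concrete, here is a minimal sketch of reading a Delta table through a OneLake path from a Fabric notebook. The workspace, lakehouse and table names are placeholders; a table exposed through a OneLake shortcut is read the same way, since the shortcut appears as an ordinary folder in the lakehouse.

```python
# Minimal sketch: read a Delta table via its OneLake (ABFS) path from a Fabric
# notebook. All angle-bracketed names are placeholders.
onelake_path = (
    "abfss://<workspace>@onelake.dfs.fabric.microsoft.com/"
    "<lakehouse>.Lakehouse/Tables/<table>"
)

df = spark.read.format("delta").load(onelake_path)
df.show(10)

# With a default lakehouse attached to the notebook, a relative path also works:
# spark.read.format("delta").load("Tables/<table>")
```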

The session is designed for data engineers and data analysts who have a fundamental knowledge of Azure data services and want to get a deeper technical understanding of Microsoft Fabric’s OneLake.

Building on Fabric - how ISVs can take advantage of Fabric

Fabric provides a wide range of capabilities that software companies can use to develop apps, ranging from data distribution apps and reporting applications on top of B2B SaaS offerings through to using Fabric as a simplified backend.

This session will be grounded in practicalities, informed by Luke and Holly's experience working with ISVs to build commercially successful applications on top of Fabric.

In this session we will cover:
- how to get started on your Fabric journey
- opportunities and varied workloads provided by Fabric for ISVs
- different ways to leverage OneLake and mirroring for customer experiences.
- understanding the APIs available for automation and embedding experiences
- multi-tenancy, segregation and isolation for managing different end-customer data
- necessary tooling to get the most out of Fabric as an ISV
- cost and manageability considerations to maximize value for your customers and for you as an ISV

Analytics superhero in a day with Microsoft Fabric

Are you ready to become an analytics superhero?

Hesitate no more! Microsoft Fabric is out and building an analytics solution cannot get any easier! If you believe that the best way to learn is to learn by example, join us!

We will build a new greenfield analytical solution using the new all-in-one Microsoft Fabric, which covers everything from data movement to data science, Real-Time Analytics, and business intelligence. You will enjoy a highly integrated, end-to-end, and easy-to-use product that is designed to simplify your analytics needs.

We will start off the day with an overview of Microsoft Fabric, set up the environment and explain the scenario for the day.

There is no use for an analytics solution if there is no data. So, we will show how data is stored in Fabric, and how you can integrate existing data or ingest data from different sources. Here we will create a landing zone and integrate existing data without making another copy!

Once data is in Microsoft Fabric OneLake, we will process it efficiently while demonstrating best practices. Data engineers are used to Spark, so it will be our tool of choice for creating the different layers while tackling all the things you need to know. During this part, you will learn how to incrementally load data and make sure your cleansing is done right, in an effective and dynamic way! Rest assured, SQL folks are not forgotten! We will show how you can access the same data using SQL!

Of course, there is no use loading data without a serving layer for your business needs. We will show you how you can use the Lakehouse & Warehouse in Microsoft Fabric, focusing on performance!

Finally, we will show you how to build a Power BI report using the new consumption method that reads data directly from the lake, called DirectLake, skipping intermediate data engines.

If you do not know where to start, don’t worry: during this day we will explain every step and how Fabric helps you get the job done. We will also share best practices we have seen across customers!

After this day you will have a clear understanding of how Fabric works and which components to use for your own top-notch analytical solution!

Fabric is brand new, so instead of throwing terminology at you, we invite you to join us and become the analytics superhero!
