Paul Andrew

Information & Communications Technology

Azure Data Platform

Derby, England, United Kingdom

Principal Consultant & Solution Architect, Data Platform MVP

Principal consultant and architect specialising in big data solutions on the Microsoft Azure cloud platform.
Data engineering competencies include Azure Data Factory, Data Lake, Databricks, Stream Analytics, Event Hub, IoT Hub, Functions, Automation, Logic Apps and of course the complete SQL Server business intelligence stack.
Many years' experience working within the healthcare, retail and gaming verticals, delivering analytics using industry-leading methods and technical design patterns.
STEM ambassador and very active member of the data platform community delivering training and technical sessions at conferences both nationally and internationally.
Father, husband, swimmer, cyclist, runner, blood donor, geek, Lego and Star Wars fan!

Current sessions

Creating a Metadata Driven Orchestration Framework Using Azure Integration Pipelines

Azure Data Factory is the undisputed PaaS resource within the Microsoft Cloud Platform for orchestrating our data workloads. With a growing set of 100+ Linked Service connections, combined with an array of control flow and data flow Activities, there isn't much Data Factory can't do in terms of solution delivery. That said, the service may still require the support of other Azure resources for the purposes of logging, compute and storage. In this session we will focus on exactly that point by extending our Data Factory with a SQL Database and an Azure Functions App. The result is the ability to create a dynamic, flexible, metadata-driven processing framework that complements our existing solution orchestrator. Furthermore, we will explore how to bootstrap multiple Data Factories, design for cost with nearly free Consumption Plans and deliver an operational abstraction over our Data Factory worker pipelines without hiding any error details in layers of dynamic JSON. Finally, we'll explore how this framework translates to Azure Synapse orchestration pipelines.
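
For a flavour of the framework's core building block, here is a minimal Python sketch of the kind of call our Azure Functions App makes: triggering a worker pipeline through the documented ADF createRun REST endpoint. The subscription, resource group, factory names and the token are hypothetical placeholders; in practice the token would come from a managed identity or service principal rather than a hard-coded string.

```python
import requests

# Hypothetical placeholders -- substitute your own identifiers. The bearer
# token would normally be acquired via a managed identity or service
# principal (e.g. azure-identity's DefaultAzureCredential), never hard-coded.
SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
FACTORY = "<worker-data-factory>"
TOKEN = "<bearer-token-for-management.azure.com>"

def trigger_worker_pipeline(pipeline_name: str, parameters: dict) -> str:
    """Start a worker pipeline run via the ADF REST API; return its run ID."""
    url = (
        f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
        f"/resourceGroups/{RESOURCE_GROUP}"
        f"/providers/Microsoft.DataFactory/factories/{FACTORY}"
        f"/pipelines/{pipeline_name}/createRun?api-version=2018-06-01"
    )
    response = requests.post(
        url,
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=parameters,  # request body becomes the worker pipeline's parameters
    )
    response.raise_for_status()
    return response.json()["runId"]
```

The framework's metadata store (the SQL Database) would supply the pipeline names and parameters passed into a call like this, with the returned run ID written back for monitoring and restartability.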


An Introduction to Azure Synapse Analytics - What is it? Why use it? And how?

The Microsoft abstraction machine is at it again with this latest veneer over what we had come to understand as the 'modern data warehouse'. Or is it?! When creating an Azure PaaS data platform/analytics solution we would typically use a set of core Azure services: Data Factory, Data Lake, Databricks and SQL Data Warehouse. Now, with the latest round of enhancements from the MPP team, it seems that in the third generation of the Azure SQLDW offering we can access all our core services as a bundle. OK, so what? Well, this is a reasonable starting point in our understanding of Azure Synapse, but it is also far from the whole story. In this session we'll go deeper into the evolution of our SQLDW to complete our understanding of why Synapse Analytics is a game changer for various data warehouse architectures. We'll discover what Synapse has to offer with its data virtualisation layer, flexible storage and multi-model compute engines. A simple veneer of things, this new resource is not. In this introduction to Synapse we'll cover the what, the why and, importantly, the how for this emerging bundle of exciting services.
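
To make the data virtualisation point concrete, here is a minimal, purely illustrative sketch (Python with pyodbc) of querying raw Parquet files in the lake through a Synapse serverless (on-demand) SQL endpoint using OPENROWSET. The workspace name, credentials and storage path are all placeholder assumptions.

```python
import pyodbc

# Illustrative placeholders: point these at your own Synapse workspace's
# serverless SQL endpoint and your own data lake account.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<workspace>-ondemand.sql.azuresynapse.net;"
    "DATABASE=master;UID=<user>;PWD=<password>"
)

# Query Parquet files sitting in the lake directly -- no loading step,
# no provisioned compute; this is the virtualisation layer at work.
sql = """
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://<storageaccount>.dfs.core.windows.net/<container>/sales/*.parquet',
    FORMAT = 'PARQUET'
) AS rows;
"""

for row in conn.cursor().execute(sql):
    print(row)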


Building an Azure Data Analytics Platform End to End

The resources on offer in Azure are constantly changing, which means that as data professionals we need to constantly change too, updating our knowledge and learning new skills. No longer can we rely on products matured over a decade to deliver all our solution requirements. Today, data platform architectures designed in Azure with best intentions and known good practices can go out of date within months. That said, is there now a set of core components we can utilise in the Microsoft cloud to ingest and deliver insights from our data? When does ETL become ELT? When is IaaS better than PaaS? Do we need to consider scaling up or scaling out? And should we make cost the primary factor when choosing certain technologies? In this session we'll explore the answers to all these questions and more from an architect's viewpoint. Based on real-world experience, let's think about just how far the breadth of our knowledge now needs to reach when starting from nothing and building a complete Microsoft Azure Data Platform solution.


Implementing Azure Data Factory in Production

If you have already mastered the basics of Azure Data Factory (ADF) and you are now looking to advance your knowledge of the resource, this is the session for you. Yes, Data Factory can handle the orchestration of our ETL pipelines. But what about our wider Azure environment? In this session we will take a deeper dive into the service, considering how to build custom activities, create metadata-driven dynamic pipelines and think about hierarchical design patterns. Plus, we'll explore ways of optimising our Azure compute costs by controlling the scaling of other resources as part of our normal data processing pipelines. How? Well, once we can hit a REST API from an ADF Web activity anything is possible, extending our Data Factory and orchestrating everything in any data platform solution. All this and more in a series of short lessons, based on real-world experience, where I will take you through how to deploy Azure Data Factory in production and apply best practices.
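
As a concrete example of the 'hit a REST API' idea, the hypothetical sketch below pauses a dedicated SQL pool (formerly Azure SQL DW) once a load completes, so we stop paying for idle compute. Inside ADF this would be a Web activity, ideally authenticated with the factory's managed identity; Python, the placeholder names and the api-version shown here are just for illustration.

```python
import requests

# Hypothetical placeholders for the resource being controlled.
SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
SERVER = "<logical-sql-server>"
DATABASE = "<dedicated-sql-pool>"
TOKEN = "<bearer-token-for-management.azure.com>"

# Pause the warehouse between loads. An ADF Web activity would POST to
# exactly this management URL, typically using the factory's managed
# identity to obtain the token.
url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.Sql/servers/{SERVER}"
    f"/databases/{DATABASE}/pause?api-version=2021-02-01-preview"
)
response = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"})
response.raise_for_status()  # 202 Accepted -> pause request in progress
```

The matching resume action, called at the start of the next processing window, completes the pattern of letting the pipeline itself manage the cost of the compute it depends on.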


Past and future events

Data Relay 2019

7 Oct - 11 Oct 2019

DATA:Scotland 2019

12 Sep 2019
Glasgow, Scotland, United Kingdom

DataGrillen 2019

19 Jun - 20 Jun 2019
Lingen, Lower Saxony, Germany

Global Azure Bootcamp 2019

27 Apr 2019
Birmingham, England, United Kingdom

Intelligent Cloud Conference 2019

7 Apr - 9 Apr 2019
Copenhagen, Capital Region, Denmark

Global Azure Bootcamp - Birmingham UK

21 Apr 2018
Birmingham, England, United Kingdom