John Martin

Information & Communications Technology

Microsoft SQL Server, Terraform, DevOps & Automation, Professional Development, AWS RDS, Information Security, AWS Architecture, Database Design, Amazon Redshift, Amazon Aurora, Amazon Athena

Bournemouth, England, United Kingdom


Data Platform and Cloud specialist, Chartered IT Professional.

John is an experienced data platform professional who has spent over a decade working with several data and cloud platform technologies.

He currently specialises in Amazon Redshift, but also has a long history with Microsoft SQL Server wherever it is deployed, as well as key Amazon data platform technologies including Amazon Aurora and Amazon Athena.

Throughout his career, John has learned how to get the most out of these platforms as well as the key pitfalls that should be avoided.

Current sessions

Data Platform Security, a holistic approach

As data professionals, it is very much a case of when, not if, we will need to deal with cybersecurity issues. This session looks at why a database can only be secured in the context of the larger system it sits within, and why that broader view is essential to success.

This session focuses on key concepts, patterns, practices, and processes rather than specific technologies, so you can apply them to your environments whether they are on-premises or in the cloud.

Patterns & Practices for loading & querying data with Amazon Redshift

Amazon Redshift lets us use SQL to analyse structured and semi-structured data across data warehouses, operational databases, and data lakes. There are several ways that the data can be loaded or accessed and it is important to use the right patterns to achieve our performance goals.

This session will look at the options for loading data directly into Amazon Redshift, as well as how to access external data. We will also look at how to build queries that unify these data assets to provide insights to our users.
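As a small illustration of the direct-loading path, a COPY statement can pull files from Amazon S3 into a Redshift table in parallel. The table, bucket path, and IAM role below are hypothetical:

```sql
-- Bulk-load Parquet files from S3 into a local Redshift table.
-- Table, bucket, and role names are illustrative only.
COPY sales_fact
FROM 's3://example-bucket/sales/2023/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftLoadRole'
FORMAT AS PARQUET;
```

COPY parallelises across files and slices, which is why it is generally preferred over row-by-row INSERTs for bulk loads.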

How to Run Successful Proof of Concepts (POCs)

Being able to rapidly and successfully evaluate a technology, technique, or high-level architecture is vital in today's fast-moving technology landscape. But just what does this involve?

In this session we will first define what a Proof of Concept (POC) is and how it differs from a prototype or similar activity. Then we will dive into everything that surrounds a POC to make sure it is scoped, prepared, executed, and summarised effectively. The end result is a POC that does not run on for weeks or months, but is delivered in a matter of days.

By the end of this session you will be in a position to run your own POCs irrespective of technology or platform.

Getting started with Amazon Redshift

Amazon Redshift is a fully managed data warehouse platform which allows us to concentrate on the high value activities around analysing our data assets. But, how do we get started using Amazon Redshift to effectively deliver insight to our colleagues and customers?

This training day will focus on giving you the knowledge needed to rapidly deploy Amazon Redshift, ingest data, and start performing analysis to deliver insight and value to your users. The agenda for the day is as follows.

- Introduction (what is Amazon Redshift & reference architectures).
- Deploying an Amazon Redshift cluster.
- Core database design concepts for Amazon Redshift.
- Loading data to Amazon Redshift.
- Querying data in Amazon Redshift.
- Operating Amazon Redshift (monitoring, optimisation, maintenance).

This training day is designed for those who understand the core concepts of data warehousing and cloud infrastructure, but have limited to no experience of Amazon Redshift.

Analysing structured and semi-structured data with Amazon Redshift

Modern data architectures contain many silos of data which need to be brought together for effective analysis. Amazon Redshift can access structured data stored in PostgreSQL or MySQL, as well as semi-structured sources in Amazon S3 such as Parquet, ORC, and Avro.

This session will show you how to break down data silos and build a unified view of business data, allowing you to analyse all your data across operational databases, data lakes, data warehouses, and third-party data sets.
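One way this unification can be sketched, using entirely hypothetical names, is to register two external schemas in Redshift: one over a data lake catalogue (Redshift Spectrum) and one over an operational PostgreSQL database (federated query), then join them in a single statement:

```sql
-- External schema over the AWS Glue Data Catalog (Redshift Spectrum).
-- All database, role, and host names below are illustrative.
CREATE EXTERNAL SCHEMA spectrum_lake
FROM DATA CATALOG
DATABASE 'lake_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole';

-- Federated schema over an operational PostgreSQL database.
CREATE EXTERNAL SCHEMA federated_pg
FROM POSTGRES
DATABASE 'orders' SCHEMA 'public'
URI 'orders-db.example.eu-west-2.rds.amazonaws.com'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftFederatedRole'
SECRET_ARN 'arn:aws:secretsmanager:eu-west-2:123456789012:secret:pg-creds';

-- Query both sources together as if they were local tables.
SELECT o.customer_id, COUNT(*) AS page_views
FROM federated_pg.customer_orders AS o
JOIN spectrum_lake.clickstream AS c ON c.customer_id = o.customer_id
GROUP BY o.customer_id;
```

Neither external schema copies any data; the lake and operational sources are read in place at query time.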

An introduction to Amazon Athena interactive query service

Amazon Athena is a serverless interactive query service which makes it easy to analyse data across multiple data stores, allowing you to gain insight from data stored in Amazon S3, as well as in many relational and non-relational systems, using ANSI SQL.

This talk will focus on providing you with the knowledge to get started with Amazon Athena to query multiple data sources quickly and easily.
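To give a flavour of how little setup is involved, the sketch below (table and bucket names are hypothetical) defines a table over Parquet files already sitting in S3 and queries it straight away:

```sql
-- Define a table over Parquet files in S3; no data is moved or loaded.
CREATE EXTERNAL TABLE web_logs (
    request_time timestamp,
    url          string,
    status       int
)
STORED AS PARQUET
LOCATION 's3://example-bucket/web-logs/';

-- Query it immediately with standard SQL.
SELECT status, COUNT(*) AS hits
FROM web_logs
GROUP BY status;
```

Because Athena charges per data scanned, columnar formats such as Parquet and sensible partitioning keep both cost and latency down.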

An Introduction to Amazon Redshift cloud data warehouse

Large-scale data analytics has traditionally meant integrating multiple technologies to deliver results. Amazon Redshift changes this by providing a Massively Parallel Processing (MPP) engine which spans your data lake, relational, and non-relational stores, delivering real-time operational analytics and allowing you to build insight-driven reports and dashboards and offer analytics as a service to your users.

This talk will help you understand how Amazon Redshift works, as well as how to use it to deliver operational analytics up to 10x faster and at up to 3x the price performance of other enterprise data warehouses.

Getting Started with Amazon RDS for SQL Server

Amazon RDS for SQL Server makes it easy to set up, operate, and scale SQL Server deployments in the cloud. As a fully managed service, it means you no longer need to worry about hardware provisioning, software patching, or backups, and you can enable high availability quickly and easily if you want it.

This talk will focus on the key concepts of Amazon RDS for SQL Server and how to deploy database resources quickly and efficiently, giving you the core knowledge to get started with SQL Server in the AWS Cloud.

Building High Performing Teams

The technology landscape has changed dramatically in the last few years, whether through the adoption of cloud, advances in technology, or new working practices. This poses new challenges for recruiting and retaining teams that deliver value to our businesses and customers.

In this session we will talk through some of the key points to consider when recruiting, interviewing, and managing teams, including:
- Defining a role-spec
- Finding candidates
- Interview process
- Team culture and language
- Managing distributed teams

By the end of this session you will have a new perspective on how to go about recruiting diverse teams of high performing people who can achieve great things.

Data Platform Security Fundamentals

Currently it is a case of when, not if, we will have to deal with a data breach or cyber-attack. Given this, it is important to understand how we can minimise the blast radius and impact of such an event.

Together we will work through the core fundamentals of how to approach security for our data platform systems. Not just infrastructure, not just the database, but how we can use the different layers to buy us time to detect an attack and respond.

Key topics covered will be:
- Zero Trust and secure by design mindset.
- Authentication and Authorisation
- Infrastructure elements
- Data security options
- Availability and Recoverability

While this session will not provide a silver bullet to solve all your problems, it will give you the foundations needed to ask the right questions and the right approach to help improve your security posture.

Data Virtualisation - Unlock instant insights with PolyBase and SQL Server

Data virtualisation is a technology and technique that is helping to change the way we look at integrating systems, performing ETL, and controlling access to data. The introduction of PolyBase in SQL Server 2019 brings this capability for accessing external data sources to a much wider audience, but how can we use it, and what do we need to know to get up and running?

Join me as we discuss the use-cases and deployment scenarios for a data virtualisation architecture using PolyBase. I will also demonstrate how we can use PolyBase to streamline and simplify the integration of Oracle Database, Hadoop, and MongoDB data sources into a single reporting database on SQL Server, without a traditional ETL platform.
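As a rough sketch of what one such integration looks like in SQL Server 2019, the statements below expose a MongoDB collection as an external table. All names, hosts, and credentials are hypothetical, and a database master key must already exist:

```sql
-- Credential for the remote MongoDB instance (illustrative values only).
CREATE DATABASE SCOPED CREDENTIAL MongoCred
WITH IDENTITY = 'mongo_user', SECRET = '********';

-- External data source pointing at the MongoDB host.
CREATE EXTERNAL DATA SOURCE MongoDBSource
WITH (LOCATION = 'mongodb://mongo-host.example.com:27017',
      CREDENTIAL = MongoCred);

-- External table over a remote collection; query it like a local table.
CREATE EXTERNAL TABLE dbo.CustomerEvents (
    customer_id INT,
    event_type  NVARCHAR(50)
)
WITH (LOCATION = 'events.dbo.customer_events',
      DATA_SOURCE = MongoDBSource);
```

Once defined, the external table can be joined with local tables in ordinary T-SQL, with PolyBase pushing work down to the source where it can.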

Terraform - up and running

When working with cloud platforms we really ought to take an Infrastructure as Code (IaC) first approach; that is, we should define our resources as code for repeatable, reliable deployment. Each cloud vendor has its own flavour of IaC, but what if we work with multiple cloud providers or hybrid environments?

Terraform is an IaC option which provides a cloud-agnostic syntax for defining resources across all the major cloud vendors, VMware, and many other SaaS and software systems. What does this mean? Well, it means we can standardise on one tool for our cloud-specific resource definitions, speeding up our time to value and delivering results.

In this session we will look at all of the elements needed to get up and running with Terraform. We will look at the key elements of the Terraform syntax, software components, and development environment. Then we will dive in and create a Terraform project to deploy Amazon RDS for SQL Server ready for us to use, covering how to declare variables, resource definitions, locals, implicit and explicit dependencies, and outputs.

By the end of this session you will be able to implement basic Terraform projects to deploy resources for your environments with the appropriate provider.

Past and future events

SQLDay 2022

9 May 2022 - 11 May 2022
Wrocław, Lower Silesia, Poland


8 Mar 2022 - 12 Mar 2022
London, England, United Kingdom

Data Saturday Southwest US

15 May 2021