Kamil Nowinski

Information & Communications Technology

SQL Server Integration Services SQL Server Data Tools Azure Data Factory Azure SQL DW DevOps & Automation Microsoft Azure DevOps Azure Synapse Microsoft SQL Server

London, United Kingdom

Kamil Nowinski

Altius, Principal Microsoft Consultant, Data Platform MVP

Blogger, speaker, #sqlfamily member. Passionate about data; Data Engineer and Architect.
He has over 15 years of programming experience with SQL Server databases (since the 2000 version), confirmed by the MCITP, MCP, MCTS, MCSA and MCSE certificates in Data Platform and Data Management & Analytics. He has worked both as a developer and as an administrator of large databases, designing systems from scratch. He is passionate about tuning database engines, code transparency and maximising database engine performance.
Three years ago he finished working as an architect on a project building a data warehouse for the Ministry of Finance within the e-duty programme.
Currently, he is expanding his horizons as a contractor on the UK market: troubleshooting, prototyping BI solutions, enhancing existing processes across Microsoft stack environments, and popularising the DevOps approach and efficient solutions in the cloud.
He loves self-development, automation, exploring new technologies and sharing his knowledge with everybody who wants to learn.
Tied to Data Community Poland (formerly PLSSUG) for many years, he acted as a member of the Audit Committee between 2012 and 2018. He worked for a couple of years as a volunteer, and is now a co-organizer of and speaker at the biggest SQL Server conference in Poland (SQLDay).
An originator of the "Ask SQL Family" podcast and founder of the SQLPlayer blog.
Privately, a happy husband and father of two wonderful girls.

Current sessions

Move part of your body to Azure Data Warehouse

Azure is cheaper, Azure is faster, Azure is more secure. Azure... everywhere is azure. Everywhere is data.
Even if not today, then certainly in the future (yes, believe me) you will face the question: how do I move my data from an on-premises data warehouse to Azure?
This session will reveal ideas for how to do that and compare those methods. I will describe potential issues and give you hints on how to avoid them.
Finally, we will see what speed we can achieve during a migration.


Azure Data Factory v2 with Data Flows capabilities

Microsoft's services in Azure help us to leverage big data more easily and make it ever more accessible to non-technical users. Alongside the UI in ADF version 2, Microsoft added a new feature: Data Flow, which resembles the components of SSIS. This is a very user-friendly, no-code tool-set.
But is it only a UI introduction? Why, and how, does Databricks work under the hood?
Do you want to get to know this new (still in private preview) feature of ADF and reveal the power of modern big data processing without knowledge of languages such as Python or Scala?
We will review this new feature of ADFv2, take a deep dive to understand the techniques mentioned, compare them to SSIS and/or T-SQL, and learn how a modelled data flow runs Scala behind the scenes.


Data replication - who with whom, for whom, why and for what?

During this session we will review all the types of replication in SQL Server, find out the principal differences between them, and learn when each should be applied. Besides examples of practical scenarios, we will consider what the alternatives are and why AlwaysOn is not one of them. As always in my sessions, expect quite a lot of demonstrations and T-SQL code.
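
As a taste of the demo material, here is a minimal sketch of setting up a transactional publication, assuming a hypothetical Sales database with an Orders table, and a Distributor that is already configured:

    -- Run in the Sales database; enable it for publishing
    EXEC sp_replicationdboption @dbname = N'Sales', @optname = N'publish', @value = N'true';

    -- Create a transactional (continuous) publication and add one article to it
    EXEC sp_addpublication @publication = N'SalesPub', @status = N'active', @repl_freq = N'continuous';
    EXEC sp_addarticle @publication = N'SalesPub', @article = N'Orders', @source_object = N'Orders';

A subscriber and the replication agents still need to be configured on top of this.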


Understand better Transaction Isolation Levels

SQL Server is an extraordinarily powerful relational database engine which lets you achieve high scalability of your data platform. For many years SQL Server has been gaining more and more new features and more efficient mechanisms, including In-Memory OLTP and ColumnStore indexes. However, there are still many companies not using those features and struggling with performance issues whose root cause turns out to be problems with concurrency.
Let's go back to the basics in order to better understand the transaction isolation levels available in SQL Server. In this session we will learn about concurrency issues, (un)expected behaviours and lost updates, and consider how to cope with them. I will explain what the optimistic and pessimistic concurrency models are, when to use each, and what tempdb has in common with them. We will also see in practice how dangerous the (NOLOCK) hint, used so passionately by developers, can be.
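
To give a flavour of the demos, here is a minimal sketch of a dirty read caused by (NOLOCK), assuming a hypothetical dbo.Accounts table:

    -- Session 1: modify a row inside a transaction that will be rolled back
    BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;

    -- Session 2 (meanwhile): NOLOCK ignores locks and sees the uncommitted value
    SELECT Balance FROM dbo.Accounts WITH (NOLOCK) WHERE AccountId = 1;

    -- Session 1: the change vanishes, but session 2 has already acted on it
    ROLLBACK TRANSACTION;

Under the default (locking) READ COMMITTED level and without the hint, session 2 would simply wait for session 1 to finish instead of reading data that never officially existed.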


Software Development Life Cycle for databases as a part of nowadays DevOps (pre-conf)

Nowadays, DevOps is the number one topic in many industries and companies. However, in some cases you will see that the code repository is, or will become, the most important topic. The number of tasks can be overwhelming at first glance, but there is no other way: you have to use the new tools and solutions.
The term "code repository" is not really new, but the way it is integrated with the database world is still sometimes questioned. And there is a lot more than just that, to name only Continuous Integration, Continuous Delivery and Continuous Deployment.
We would like to show you the tools for efficiently managing database projects (SQL Server Data Tools), how to start working with projects, how to deal with the problems you will probably encounter in daily operations, and how to configure and manage the projects' deployment process across different environments.

We will go through the Software Development Life Cycle (SDLC) process in great detail from the database point of view. We will not spend too much time on analysis, though, focusing rather on the development part. We would also like to show the usage of the Octopus application.
Of course, there will be an entire module about best practices and how to use them efficiently in database projects.
In the end, we would like to touch on the cloud and show how to migrate an existing on-premises database to Microsoft Azure SQL Database, and how not to get into trouble.
After attending the workshop, you will know how to:
* create an empty database project or import an existing database
* resolve various problems during the import of a database
* manage a database project and its objects
* handle CLR objects
* store data in a project (static data, dictionaries, etc.)
* decide what should be part of a project and what shouldn't (linked servers, security)
* decide where and how to keep SQL Agent jobs
* split a database project into smaller chunks, and why that is sometimes required
* cope with an unlimited number of projects
* avoid known issues such as temp tables, triggers, circular references, OPENQUERY and lack of validation
* migrate a project to Microsoft Azure (cloud!)
* use a hybrid approach
* apply tSQLt unit tests (see the sketch after this list)
* make a deployment manually and automatically (Octopus)
* distinguish (finally) all three types of "Continuous"
* use some helpful PowerShell scripts
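
To show the shape of the unit-testing part, here is a minimal tSQLt sketch, assuming a hypothetical dbo.GetCustomerCount function that counts rows in dbo.Customers:

    -- Create a test class (schema) and one test case
    EXEC tSQLt.NewTestClass 'testCustomers';
    GO
    CREATE PROCEDURE testCustomers.[test GetCustomerCount returns zero for empty table]
    AS
    BEGIN
        -- FakeTable swaps the real table for an empty, constraint-free copy
        EXEC tSQLt.FakeTable 'dbo.Customers';

        DECLARE @actual INT = dbo.GetCustomerCount();

        EXEC tSQLt.AssertEquals @Expected = 0, @Actual = @actual;
    END;
    GO
    -- Run every test in the class
    EXEC tSQLt.Run 'testCustomers';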

We are also going to show you commercial tools, as well as some tips & tricks.

Requirements:
Basic TSQL knowledge
Basic Visual Studio knowledge

The workshop will be run in Visual Studio 2017 with the newest SSDT installed, but you can use an older version of Visual Studio as well.
Bring your laptop with you, as we are going to do most of the tasks together!
You will have access to all the code and the slide deck.

Trainers:
Kamil Nowiński @NowinskiK
http://SQLPlayer.net (blog)


Databases with SSDT: Deployment in CI/CD process with Azure DevOps

When working on a database in SSDT, there is a need to deploy our changes to further environments and, at the same time, maintain the consistency of databases between environments. During the session, I will present how we can publish the solution manually, and then move to the Continuous Integration and Continuous Deployment process using the Azure DevOps environment (formerly VSTS). In addition, we will work on adding unit tests, approval steps and more, using Pester and PowerShell, in order to gain full automation in our database deployment process.
This session is a continuation of my last year's session, which is why I assume that attendees know the basics of SSDT.

SSDT allows you to import and maintain a database project within Visual Studio. Add a few more steps to test and deploy changes and data to a target SQL Server with Azure DevOps pipelines.


When and how to migrate to Azure Synapse?

I'm very glad that Microsoft changed the name. Azure Synapse is Azure SQL Data Warehouse evolved.
Moreover: Azure Synapse is much more than a single service now.
During the session, I will explain what it is and why migration to this MPP architecture is not as simple as a "copy-paste" of your dataset. You might therefore be wondering: "how do I move my data from an on-premises data warehouse to Azure Synapse?".
This session reveals ideas on how to do that and how to recognise whether your company is ready for this move yet. What are the best practices, and what other tools does Azure Synapse offer to leverage?
The session is for everyone who knows the data warehousing concept and wants to broaden their horizons with modern capabilities for processing data at scale.


Transform existing SSIS packages into pipelines in Azure Data Factory

Most ETL developers know Integration Services very well.
When it comes to Azure, there are several services available, but Azure Data Factory (ADF) looks the most promising.
Having many years of experience with SSIS, you might wonder: how do I just lift and shift the workload to Azure Data Factory?
What is it? How do you use it? How do you run existing packages in the cloud?
In which scenarios does Azure help, and how do you migrate those SSIS packages to ADF?
This session is for ETL/SSIS developers, to help them understand what ADF is and when and how to implement their current SSIS data flows with it.


Azure Databricks 101

Many sources? Various formats? Unstructured data? Big data? You might think these are only buzzwords. Not really: these days they are part of modern data flow architecture. No matter what you use - SQL Server, Cosmos DB, Azure SQL DW, Azure Data Factory, Data Lake... somewhere in there you can find Databricks. So, the question is: what is Azure Databricks, and in which scenarios can it be used?
Use Databricks to analyse large datasets at scale, writing Python, Scala or SQL commands in one notebook to ingest, process and push the data to the required target. Use a Databricks notebook as part of an Azure Data Factory pipeline. We will also try to answer whether Databricks could replace SSIS as a modern ETL/ELT process.
If you are wondering about all these things - you should join me in this session.
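
For a taste of the notebook experience, here is a minimal Spark SQL sketch, assuming a hypothetical folder of raw CSV files already mounted at /mnt/datalake/raw/sales:

    -- Expose the raw CSV files as a queryable table
    CREATE TABLE IF NOT EXISTS raw_sales
    USING CSV
    OPTIONS (path '/mnt/datalake/raw/sales', header 'true', inferSchema 'true');

    -- Process: aggregate and persist the result for downstream consumers
    CREATE TABLE IF NOT EXISTS curated_sales_by_country
    USING PARQUET
    AS SELECT Country, SUM(Amount) AS TotalAmount
       FROM raw_sales
       GROUP BY Country;

The same two steps could equally be written in Python or Scala within the very same notebook.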

Azure Databricks for beginners, where we will try to understand in which scenarios notebooks and a Spark cluster can be leveraged and helpful.


Lightning Talk: Reference/master data for database project

By default, SSDT (SQL Server Data Tools) does not offer capabilities for deploying the data of server-level objects. In this talk, I will show you how to quickly fill that gap and generate a script with INSERT/MERGE statements in it.
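
A hand-written equivalent of such a script could look like this minimal sketch, assuming a hypothetical dbo.OrderStatus dictionary table; a MERGE keeps the deployment idempotent:

    -- Idempotent seeding of a dictionary table: safe to re-run on every deployment
    MERGE dbo.OrderStatus AS target
    USING (VALUES
        (1, N'New'),
        (2, N'Shipped'),
        (3, N'Cancelled')
    ) AS source (StatusId, StatusName)
    ON target.StatusId = source.StatusId
    WHEN MATCHED THEN
        UPDATE SET target.StatusName = source.StatusName
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (StatusId, StatusName) VALUES (source.StatusId, source.StatusName)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;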


Lightning Talk: Cosmos DB - when yes and when not?

Azure Cosmos DB offers single-digit-millisecond access to a NoSQL database. But what does that mean, and when exactly should we use it? We will go through a few scenarios where Cosmos DB fits very well, and an example where it completely doesn't.


Past and future events

SQLDay 2020

11 May 2020 - 13 May 2020
Wrocław, Poland

SQLSaturday Slovenia

14 Dec 2019
Ljubljana, Slovenia

SQL Saturday #926 Lisbon

30 Nov 2019
Lisbon, Portugal

Data Relay 2019

7 Oct 2019 - 11 Oct 2019

SQL Saturday #904 Madrid

28 Sep 2019
Madrid, Spain

SQL Saturday #898 Gothenburg

14 Sep 2019
Göteborg, Sweden

SQL Saturday #857 Kyiv

18 May 2019
Kyiv, Ukraine

SQLDay 2019

13 May 2019 - 15 May 2019
Wrocław, Poland

Data in Devon 2019

26 Apr 2019
Exeter, United Kingdom

SQLBits 2019

27 Feb 2019 - 2 Mar 2019
Manchester, United Kingdom

SQL Saturday #829 Pordenone

23 Feb 2019
Pordenone, Italy