Kamil Nowinski

Information & Communications Technology

SQL Server Integration Services, SQL Server Data Tools, Azure Data Factory, Azure SQL DW, DevOps & Automation, Microsoft Azure DevOps, Azure Synapse, Microsoft SQL Server, PowerShell

London, England, United Kingdom

Kamil Nowinski

Blogger, Speaker, Microsoft Data Platform MVP. Group Manager & Analytics Architect. MCSE Data Management and Analytics

Blogger, speaker, #sqlfamily member. Microsoft Data Platform MVP. Passionate about data; Data Engineer and Architect.
He has over 20 years of programming experience with SQL Server databases (since the 2000 version), confirmed by the MCITP, MCP, MCTS, MCSA, and MCSE Data Platform & Data Management and Analytics certifications. He has worked both as a developer and as an administrator of large databases, designing systems from scratch. Recently he has focused on the Data Platform in Azure as a certified (Azure DevOps Engineer Expert, Azure Developer Associate) Data Engineer and Azure Architect.
He is passionate about the optimization of database systems, an advocate of code transparency, open-source projects and automation, and a DevOps and PowerShell fan.

Since 2015 he has been living and working in the UK. He is currently professionally associated with Avanade, an international consulting company.

He has been tied to Data Community Poland (formerly PLSSUG) for many years, serving as a Member of the Audit Committee between 2012 and 2018. He worked for several years as a volunteer and is now a co-organizer of, and speaker at, SQLDay, the biggest SQL Server conference in Poland.

The originator of the "Ask SQL Family" podcast and founder of the SQLPlayer blog.
Privately happy husband and father of two wonderful girls.

Current sessions

Azure Data Factory - deployment challenges

ADF is an important building block in the architecture of any modern data warehousing solution, among many other scenarios.
Although the service has existed for some time now and we know its capabilities pretty well, its deployment still leaves much to be desired, especially in more complex instances.
In this session, I will show a few challenges in publishing ADF and solutions for them.

ADF Deployments with Azure DevOps

Azure Data Factory is a great orchestration tool in the cloud; it is mature and has been with us for a while now.
Authoring pipelines and other objects as a developer via the browser (v2), working appropriately with branches and debug mode, and understanding the integration with a Git repository can be a bit tricky.
Add to this the need for deployment to different environments, the adf_publish branch, and the question of why two methods of deployment actually exist, and these things can be overwhelming.
Learn the best ways of working with ADF, the scripts and tools for deployment, and the differences between them. See how to generate/export ARM template files automatically (not via the UI) and use them in further steps in Azure DevOps, if you prefer that approach.

Azure Data Factory v2 with Data Flows capabilities

Microsoft's services in Azure help us leverage big data more easily and make it more accessible to non-technical users. With the UI in ADF version 2, Microsoft added a new feature: Data Flow, which resembles the components of SSIS. It is a very user-friendly, no-code tool-set.
But is it only a UI addition? Why, and how, does Databricks work under the hood?
Do you want to get to know this new (still in private preview) feature of ADF and unlock the power of modern big data processing without knowledge of languages like Python or Scala?
We will review this new feature of ADFv2, take a deep dive to understand the techniques mentioned, compare them to SSIS and/or T-SQL, and learn how a modelled data flow runs Scala behind the scenes.

Understand Transaction Isolation Levels better

SQL Server is an extraordinarily powerful relational database engine that lets you achieve high scalability of your data platform. For many years SQL Server has been gaining new features and more efficient mechanisms, including In-Memory OLTP and ColumnStore indexes. However, there are still many companies not using those features and struggling with performance issues whose root cause turns out to be concurrency problems.
Let's go back to the basics in order to better understand the transaction isolation levels available in SQL Server. In this session we will learn about concurrency issues, (un)expected behaviours and lost updates, and consider how to cope with them. I will explain what the optimistic and pessimistic concurrency models are, when to use each, and what tempdb has in common with them. We will also see in practice how dangerous the (NOLOCK) hint, used so passionately by developers, can be.
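The lost-update problem and the optimistic concurrency model mentioned above can be illustrated in a few lines. This is a minimal sketch only: SQLite stands in for SQL Server so it runs anywhere, and the hypothetical integer `version` column stands in for a SQL Server rowversion column.

```python
import sqlite3

# Minimal optimistic-concurrency sketch: each writer re-checks the row version
# it originally read, so a concurrent modification surfaces as a visible
# conflict instead of a silent lost update.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER, version INTEGER)"
)
conn.execute("INSERT INTO account VALUES (1, 100, 1)")

def read_account(conn, account_id):
    return conn.execute(
        "SELECT balance, version FROM account WHERE id = ?", (account_id,)
    ).fetchone()

def write_balance(conn, account_id, new_balance, expected_version):
    # The WHERE clause enforces the optimistic check: zero rows updated means
    # someone else changed the row since we read it.
    cur = conn.execute(
        "UPDATE account SET balance = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_balance, account_id, expected_version),
    )
    return cur.rowcount == 1

# Two "sessions" read the same row...
balance1, version1 = read_account(conn, 1)
balance2, version2 = read_account(conn, 1)
# ...the first write wins; the second is rejected instead of silently lost.
first_ok = write_balance(conn, 1, balance1 - 30, version1)   # succeeds
second_ok = write_balance(conn, 1, balance2 - 50, version2)  # stale version
```

Under a pessimistic model the second session would instead block on a lock until the first committed; the optimistic model trades that blocking for a retry on conflict.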

Software Development Life Cycle for databases as a part of nowadays DevOps (pre-conf)

Nowadays, DevOps is the number-one topic in many industries and companies. In some cases, however, you will find that the code repository is, or will become, the most important topic. The number of tasks can be overwhelming at first glance, but there is no other way: you have to use the new tools and solutions.
The code repository is not really a new term, but the way it integrates with the database world is still sometimes questioned. And there is a lot more than that, to name just Continuous Integration, Continuous Delivery and Continuous Deployment.
We would like to show you tools to efficiently manage database projects (SQL Server Data Tools), how to start working with projects, how to deal with problems you will probably encounter during daily operations, and how to configure and manage the projects' deployment process across different environments.

We will go through the Software Development Life Cycle (SDLC) process in great detail from the database point of view. We will not spend too much time on analysis, but rather on the development part. We would also like to show the usage of the Octopus Deploy application.
Of course, there will be an entire module about best practices and how to apply them efficiently in database projects.
Finally, we would like to touch on the cloud and show how to migrate an existing on-premises database to Azure SQL Database, and how not to get into trouble doing it.
After attending the workshop you will be able to:
* create an empty database project or import an existing database
* resolve various problems during the import of a database
* manage a database project and its objects
* handle CLR objects
* store data in a project (static data, dictionaries, etc.)
* decide what should be part of a project and what shouldn't (Linked Servers, security)
* decide where and how to keep SQL Agent jobs
* split a database project into smaller chunks, and understand why that's sometimes required
* cope with an unlimited number of projects
* avoid known issues such as temp tables, triggers, circular references, OPENQUERY and lack of validation
* migrate a project to Microsoft Azure (cloud!)
* use a hybrid approach
* apply tSQLt unit tests
* make a deployment manually and automatically (Octopus Deploy)
* distinguish (finally) all three types of "Continuous"
* use some helpful PowerShell scripts

We are going to show you commercial tools as well as some tips & tricks.

Basic T-SQL knowledge
Basic Visual Studio knowledge

The workshop will be delivered in Visual Studio 2017 with the newest SSDT installed, but you can use an older version of Visual Studio as well.
You can bring your laptop, as we are going to do most of the tasks together with you!
You will have access to all the code and the slide deck.

Kamil Nowiński @NowinskiK
http://SQLPlayer.net (blog)

Databases with SSDT: Deployment in CI/CD process with Azure DevOps

When working on a database in SSDT, there is a need to deploy our changes to further environments while maintaining the consistency of the databases between environments. During the session, I will present how we can publish the solution manually and then move to a Continuous Integration and Continuous Deployment process using the Azure DevOps environment (formerly VSTS). In addition, we will work on adding unit tests, approval steps and more, using Pester and PowerShell, in order to gain full automation in our database deployment process.

SSDT allows you to import and maintain a database project within Visual Studio. Add a few more steps to test and deploy changes and data to the target SQL Server with Azure DevOps pipelines.
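The publish step of such a pipeline ultimately boils down to a SqlPackage invocation against the built .dacpac. Below is a hedged sketch (not the session's actual scripts) composing the documented `SqlPackage /Action:Publish` command line; the server, database and file names are illustrative placeholders.

```python
# Sketch: building the SqlPackage command an Azure DevOps release step could
# run to publish an SSDT-built .dacpac. All names are placeholders.
def sqlpackage_publish_args(dacpac_path, server, database):
    """Compose the argument list for a 'SqlPackage /Action:Publish' call."""
    return [
        "SqlPackage",
        "/Action:Publish",
        f"/SourceFile:{dacpac_path}",
        f"/TargetServerName:{server}",
        f"/TargetDatabaseName:{database}",
    ]

args = sqlpackage_publish_args(
    "bin/Release/MyDatabase.dacpac",      # hypothetical build output
    "myserver.database.windows.net",      # hypothetical target server
    "MyDatabase",                         # hypothetical target database
)
# On an agent with SqlPackage installed: subprocess.run(args, check=True)
```

Because SqlPackage diffs the .dacpac model against the live database, the same artifact can be promoted unchanged through every environment, which is what keeps them consistent.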

When and how to migrate to Azure Synapse?

I'm very glad that Microsoft changed the name: Azure Synapse is Azure SQL Data Warehouse evolved.
Moreover, Azure Synapse is now much more than a single service.
During the session, I will explain what it is and why migration to this MPP architecture is not as simple as a "copy-paste" of a dataset. You might therefore be wondering: "How do I move my data from an on-premises Data Warehouse to Azure Synapse?"
This session reveals ideas on how to do that, how to recognize whether your company is ready for this move or not quite yet, what the best practices are, and what other tools Azure Synapse offers to leverage.
The session is for everyone who knows the data warehousing concept and wants to broaden their horizons with the modern capabilities of processing data at scale.


Azure Databricks 101

Many sources? Various formats? Unstructured data? Big data? You might think these are only buzzwords. Not really: these days they are part of modern data flow architecture. No matter what you use - SQL Server, Cosmos DB, Azure SQL DW, Azure Data Factory, Data Lake... somewhere in there you can find Databricks. So, the question is: what is Azure Databricks, and in which scenarios can it be used?
Use Databricks to analyse large datasets at scale; write Python, Scala or SQL commands in one notebook to ingest, process and push the data to the required target; use a Databricks notebook as part of an Azure Data Factory pipeline. We will also try to answer whether Databricks can replace SSIS as a modern ETL/ELT process.
If you are wondering about all these things, you should join me for this session.

Azure Databricks for beginners, where we will try to understand in which scenarios notebooks and a Spark cluster can be leveraged and helpful.

Lightning Talk: Reference/master data for database project

By default, SSDT (SQL Server Data Tools) does not offer capabilities for deploying the data of server-level objects. In this talk, I will show you how to quickly fill that gap and generate a script with INSERT/MERGE statements in it.
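The core idea can be sketched as a small generator that turns in-memory reference rows into a T-SQL MERGE statement. This is a minimal sketch; the table, key and column names are illustrative examples, and the talk's own tooling may well differ.

```python
# Sketch: generate a T-SQL MERGE statement for reference/master data from
# in-memory rows. Names are illustrative; string values are quoted naively
# (no escaping of embedded single quotes) to keep the sketch short.
def generate_merge(table, key_col, cols, rows):
    values = ",\n    ".join(
        "(" + ", ".join(f"'{v}'" if isinstance(v, str) else str(v) for v in row) + ")"
        for row in rows
    )
    col_list = ", ".join(cols)
    set_clause = ", ".join(f"t.{c} = s.{c}" for c in cols if c != key_col)
    return (
        f"MERGE {table} AS t\n"
        f"USING (VALUES\n    {values}\n) AS s ({col_list})\n"
        f"ON t.{key_col} = s.{key_col}\n"
        f"WHEN MATCHED THEN UPDATE SET {set_clause}\n"
        f"WHEN NOT MATCHED THEN INSERT ({col_list})\n"
        f"    VALUES ({', '.join('s.' + c for c in cols)});"
    )

sql = generate_merge("dbo.Status", "StatusId", ["StatusId", "Name"],
                     [(1, "New"), (2, "Closed")])
```

Such a generated script can then be added to the SSDT project as a post-deployment script, so the reference data is upserted on every publish.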

Lightning Talk: Cosmos DB - when yes and when not?

Azure Cosmos DB offers single-digit-millisecond data access to a NoSQL database. But what does that mean, and when exactly should we use it? We will go through a few scenarios where Cosmos DB fits very well, and an example where it completely doesn't.

Azure Cosmos DB introduction

Cosmos DB is Microsoft's globally distributed, multi-model database service. It's a database from the NoSQL family, but that does not mean SQL is not involved. During the session, I will explain what the service is, how many different APIs we can use, how elastically you can scale throughput and storage, and what kinds of scenarios are a good fit for this technology. The demo shows you how to start with Cosmos DB and what kinds of things you should be aware of.

Past and future events


8 Mar 2022 - 12 Mar 2022
London, England, United Kingdom

#DataWeekender v3.1

15 May 2021

Global Azure 2021

15 Apr 2021 - 17 Apr 2021

SQLDay 2020

30 Nov 2020 - 2 Dec 2020
Wrocław, Lower Silesia, Poland

dataMinds Connect 2020 (Virtual Edition)

13 Oct 2020
Mechelen, Flanders, Belgium

SQLBits 2020

29 Sep 2020 - 3 Oct 2020
London, England, United Kingdom

SQLSaturday Slovenia

14 Dec 2019
Ljubljana, Slovenia

SQL Saturday #926 Lisbon

30 Nov 2019
Lisbon, Portugal

Data Relay 2019

7 Oct 2019 - 11 Oct 2019

SQL Saturday #904 Madrid

28 Sep 2019
Madrid, Spain

SQL Saturday #898 Gothenburg

14 Sep 2019
Göteborg, Västra Götaland, Sweden

SQL Saturday #857 Kyiv

18 May 2019
Kyiv, Kyiv City, Ukraine

SQLDay 2019

13 May 2019 - 15 May 2019
Wrocław, Lower Silesia, Poland

Data in Devon 2019

27 Apr 2019
Exeter, England, United Kingdom

SQLBits 2019

27 Feb 2019 - 2 Mar 2019
Manchester, England, United Kingdom

SQL Saturday #829 Pordenone

23 Feb 2019
Pordenone, Friuli Venezia Giulia, Italy