Kamil Nowinski

Information & Communications Technology

SQL Server, SQL Server Integration Services, SQL Server Data Tools, Power BI, Azure Data Factory, Azure SQL DW, DevOps

London, United Kingdom

Kamil Nowinski

Altius, Principal Microsoft Consultant, Data Platform MVP

Blogger, speaker, #sqlfamily member. Passionate about data; a data engineer and architect.
He has over 15 years of programming experience with SQL Server databases (since the 2000 version), confirmed by the MCITP, MCP, MCTS and MCSA certificates and the MCSE Data Platform and Data Management & Analytics certifications. He has worked both as a developer and as an administrator of big databases, designing systems from scratch. He is passionate about tuning database engines, code transparency and maximising database engine performance.
Three years ago he finished working as an architect on a project building a data warehouse for the Ministry of Finance within the e-duty programme.
Currently, he is expanding his horizons as a contractor in the UK market: troubleshooting, prototyping BI solutions, enhancing existing processes in Microsoft-based environments, and popularising the DevOps approach and efficient solutions in the cloud.
He loves self-development and automation, enjoys exploring new technologies, and shares his knowledge with everybody who wants it.
He has been tied to Data Community Poland (formerly PLSSUG) for many years, and between 2012 and 2018 served as a member of its Audit Committee. He worked for a couple of years as a volunteer, and is now a co-organiser of and speaker at the biggest SQL Server conference in Poland (SQLDay).
An originator of the "Ask SQL Family" podcast and founder of the SQLPlayer blog.
Privately, a happy husband and father of two wonderful girls.

Current sessions

Move part of your body to Azure Data Warehouse

Azure is cheaper, Azure is faster, Azure is more secure. Azure... everywhere is azure. Everywhere is data.
Even if not today, then certainly in the future (yes, believe me) you will face the question: how do I move my data from an on-premises Data Warehouse to Azure?
This session will present the possible approaches and compare those methods. I will describe potential issues and give you hints on how to avoid them.
Finally, we will see what speed we can achieve during a migration.
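As a flavour of one such method (my illustration here, not necessarily the session's full list), data is often staged in Azure Blob Storage and loaded into Azure SQL DW through PolyBase external tables. A minimal T-SQL sketch; the storage account, container and table names below are all hypothetical:

-- Point the warehouse at the staged files (a database-scoped credential
-- is typically required for private storage; omitted here for brevity).
CREATE EXTERNAL DATA SOURCE AzureStaging
WITH (
    TYPE = HADOOP,
    LOCATION = 'wasbs://staging@mystorageaccount.blob.core.windows.net'
);

CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',')
);

-- External table over the exported on-premises data.
CREATE EXTERNAL TABLE dbo.SalesExternal (
    SaleId   INT           NOT NULL,
    SaleDate DATE          NOT NULL,
    Amount   DECIMAL(18,2) NOT NULL
)
WITH (
    LOCATION = '/sales/',
    DATA_SOURCE = AzureStaging,
    FILE_FORMAT = CsvFormat
);

-- CTAS performs the actual parallel load into the warehouse.
CREATE TABLE dbo.Sales
WITH (DISTRIBUTION = HASH(SaleId))
AS SELECT * FROM dbo.SalesExternal;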


Azure Data Factory v2 with Data Flows capabilities

Microsoft's services in Azure help us leverage big data more easily and make it accessible even to non-technical users. Building on the UI in ADF version 2, Microsoft has added a new feature: Data Flow, which resembles the components of SSIS. This is a very user-friendly, no-code tool-set.
But is it merely a UI addition? Why, and how, does Databricks work under the hood?
Do you want to get to know this new (still in private preview) feature of ADF and unlock the power of modern big data processing without knowing languages such as Python or Scala?
We will review this new feature of ADFv2, take a deep dive to understand the techniques mentioned above, compare them to SSIS and/or T-SQL, and learn how a modelled data flow runs as Scala behind the scenes.


Data replication - who with whom, for whom, why and for what?

During this session we will review all types of replication in SQL Server, find out the principal differences between them, and learn when each should be applied. Besides examples of practical scenarios, we will consider the alternatives we have and why AlwaysOn is not one of them. As always in my sessions, expect quite a lot of demonstrations and T-SQL code.
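As a taste of that demo code (a minimal sketch of my own, not the session material), the standard catalog view sys.databases already tells you which databases on an instance take part in replication:

-- Which databases participate in replication, and in what role?
--   is_published       = publishes transactional/snapshot publications
--   is_merge_published = publishes merge publications
--   is_subscribed      = receives a subscription
--   is_distributor     = hosts the distribution database
SELECT name, is_published, is_merge_published, is_subscribed, is_distributor
FROM sys.databases
WHERE is_published = 1
   OR is_merge_published = 1
   OR is_subscribed = 1
   OR is_distributor = 1;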


Understand Transaction Isolation Levels better

SQL Server is an extraordinarily powerful relational database engine that lets you achieve high scalability of your data platform. For many years SQL Server has been gaining more and more new features and more efficient mechanisms, including In-Memory OLTP and ColumnStore indexes. However, there are still many companies not using those features and struggling with performance issues whose root cause turns out to be problems with concurrency.
Let's go back to basics in order to better understand the transaction isolation levels available in SQL Server. In this session we will learn about concurrency issues, (un)expected behaviours and lost updates, and consider how to cope with them. I will explain what the optimistic and pessimistic concurrency models are, when to use each, and what tempdb has in common with them. We will also see in practice how dangerous the (NOLOCK) hint, used so passionately by developers, can be.
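To make that danger concrete, here is a minimal sketch of the classic dirty read that (NOLOCK), i.e. READ UNCOMMITTED, permits (the dbo.Account table and its values are hypothetical):

-- Window 1: change a value inside a transaction that will be rolled back.
BEGIN TRANSACTION;
UPDATE dbo.Account SET Balance = Balance - 100 WHERE AccountId = 1;
-- ...the transaction is still open; the change is uncommitted...

-- Window 2 (run while window 1 is still open):
-- NOLOCK reads the uncommitted balance - a dirty read.
SELECT Balance FROM dbo.Account WITH (NOLOCK) WHERE AccountId = 1;

-- The hint is equivalent to:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Balance FROM dbo.Account WHERE AccountId = 1;

-- Window 1: roll back; window 2 has already acted on a value
-- that officially never existed.
ROLLBACK TRANSACTION;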


Software Development Life Cycle for databases as part of today's DevOps (pre-conf)

Nowadays DevOps is the number one topic in many industries and companies. In many cases, you will see that the code repository is, or will become, the most important element. The number of tasks can be overwhelming at first glance, but there is no other way: you have to adopt the new tools and solutions.
The term 'code repository' is not really new, but the way it integrates with the database world is still sometimes questioned. And there is a lot more to it than that, to name just Continuous Integration, Continuous Delivery and Continuous Deployment.
We would like to show you tools for managing database projects efficiently (SQL Server Data Tools), how to start working with such projects, how to deal with problems you will probably encounter during daily operations, and how to configure and manage the projects' deployment process across different environments.

We will go through the Software Development Life Cycle (SDLC) process in great detail from the database point of view. We will not spend too much time on analysis, focusing instead on the development part. We would also like to show the usage of the Octopus Deploy application.
Of course, there will be an entire module about best practices and how to apply them efficiently in database projects.
In the end, we would like to touch on the cloud and show how to migrate an existing on-premises database to Microsoft Azure SQL Database, and how not to get into trouble.
After attending the workshop you will know how to:
* create empty databases or import existing ones
* resolve various problems during the import of a database
* manage a database project and its objects
* handle CLR objects
* store data in a project (static data, dictionaries, etc.)
* decide what should be part of a project and what shouldn't (linked servers, security)
* decide where and how to keep SQL jobs
* split a database project into smaller chunks, and why that is sometimes required
* cope with an unlimited number of projects
* avoid known issues such as temp tables, triggers, circular references, OPENQUERY and lack of validation
* migrate a project to Microsoft Azure (the cloud!)
* use a hybrid approach
* apply tSQLt unit tests (see the sketch after this list)
* deploy manually and automatically (Octopus Deploy)
* distinguish (finally) between all three types of "Continuous"
* use some helpful PowerShell scripts
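As a flavour of the unit-testing module, here is a minimal tSQLt sketch (the test class, procedure and table names are hypothetical, not the workshop's material):

-- Create a test class (a schema that groups related tests).
EXEC tSQLt.NewTestClass 'CustomerTests';
GO
CREATE PROCEDURE CustomerTests.[test Customer count matches inserted rows]
AS
BEGIN
    -- FakeTable isolates the test from existing data and constraints.
    EXEC tSQLt.FakeTable 'dbo.Customer';
    INSERT INTO dbo.Customer (CustomerId) VALUES (1), (2);

    DECLARE @actual INT = (SELECT COUNT(*) FROM dbo.Customer);

    EXEC tSQLt.AssertEquals @Expected = 2, @Actual = @actual;
END;
GO
-- Run every test in the class.
EXEC tSQLt.Run 'CustomerTests';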

We are also going to show you commercial tools, as well as some tips & tricks.

Requirements:
Basic T-SQL knowledge
Basic Visual Studio knowledge

The workshop will be run in Visual Studio 2017 with the newest SSDT installed, but you can use an older version of Visual Studio as well.
Take your laptop with you, as we are going to do most of the tasks together!
You will have access to all the code and the slide deck.

Trainers:
Kamil Nowiński @NowinskiK
http://SQLPlayer.net (blog)


Maintain a database project and Continuous Delivery using Microsoft Data Tools in practical terms

The task seems easy: maintain a database project in the code repository, treat it as the master version, and deploy it regularly and frequently. Simple? Seemingly. Things become more complex as the number of objects in the database grows. When instead of one database we have over a dozen. When databases have references to each other. And how about dictionary tables? Where do we keep them and how do we script them? Additional issues arise when we want to control instance-level objects.
All these topics I will explain in a session focused on the practical aspects of working with Microsoft Visual Studio Data Tools.
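On the dictionary-table question, one common SSDT pattern (an illustration of mine, not the only answer) is an idempotent MERGE in a post-deployment script, so the reference data lives in source control and is re-applied on every deployment. A minimal sketch with a hypothetical dbo.OrderStatus table:

-- Post-deployment script: keep the dictionary table in sync with
-- the values held in source control. Safe to run on every deployment.
MERGE dbo.OrderStatus AS target
USING (VALUES
    (1, N'New'),
    (2, N'Paid'),
    (3, N'Shipped')
) AS source (StatusId, StatusName)
ON target.StatusId = source.StatusId
WHEN MATCHED AND target.StatusName <> source.StatusName THEN
    UPDATE SET StatusName = source.StatusName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (StatusId, StatusName) VALUES (source.StatusId, source.StatusName)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;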