
Tillmann Eitelberg
oh22information services GmbH
Königswinter, Germany
Tillmann Eitelberg is CEO and co-founder of oh22information services GmbH, which specializes in data management and data governance and offers its own cloud-born data quality solution, HEDDA.IO.
Tillmann is a regular speaker at international conferences and an active blogger and podcaster at DECOMPOSE.IO. He has open-sourced several SSIS components and is co-author of Power BI for Dummies (German Edition). Since 2013, Tillmann has been awarded the Microsoft Data Platform MVP. He is a user group leader of the PASS Germany RG Rheinland (Cologne) and a member of the Microsoft Azure Data Community Advisory Board.
Using DuckDB on Microsoft Fabric
DuckDB has been shaking up the analytics space as an in-process OLAP database that's lightweight, incredibly fast, and easy to use. While it has been a hot topic in the Databricks community, its role in Microsoft Fabric is less frequently discussed, but no less exciting. So, how well do DuckDB and Fabric actually play together?
In this session, we’ll explore the current state of DuckDB in the Fabric ecosystem, including where it fits in, what’s possible today, and what gaps still exist. We’ll compare its capabilities with Fabric’s built-in engines, analyze how it interacts with OneLake and Delta Tables, and evaluate its performance in real-world scenarios.
Through practical examples and live demos, you’ll learn how DuckDB can accelerate query execution, simplify data transformations, and complement Fabric’s native tools. We’ll also discuss when DuckDB is a game-changer—and when you might be better off with other options.
By the end of this talk, you’ll have a clear understanding of how to integrate DuckDB into your Fabric workflows, what advantages (and limitations) to expect, and whether it deserves a spot in your analytical toolbox.
AI-Powered Search On-Prem with Native Vector Support in SQL Server 2025
Vector search is at the core of modern AI applications, powering recommendation engines, semantic search, and image recognition. Traditionally, implementing vector-based queries required adding a dedicated vector database, introducing complexity and additional infrastructure. But with SQL Server 2025, you can now store, query, and optimize vector embeddings natively - all within your existing environment, running entirely on-premises.
Why does this matter? For organizations needing full control over their data, SQL Server 2025 enables AI-powered search without relying on external cloud services or third-party databases. Whether working with text, images, or other unstructured data, you can leverage vector search while keeping everything inside your on-premises SQL Server environment. Built-in Row-Level Security (RLS) ensures that sensitive vector data remains protected, making it enterprise-ready for governance and compliance.
In this demo-driven session, we'll break down what vectors are, why they’re essential for AI applications, and how SQL Server 2025 simplifies vector search. You'll learn how to store and query vector embeddings efficiently, explore real-world scenarios like AI-powered recommendations and intelligent search, and discover best practices for indexing and optimizing performance.
If you're looking for a way to integrate AI-powered search within your on-premises SQL Server - without external infrastructure - this is a session you won’t want to miss!
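As a primer on what a vector search actually computes, the sketch below implements cosine-distance nearest-neighbor lookup in plain Python. The T-SQL in the comment reflects the SQL Server 2025 preview surface (`VECTOR` column type, `VECTOR_DISTANCE`) and should be treated as an assumption; the document names, query text, and three-dimensional "embeddings" are all hypothetical (real embedding models produce hundreds of dimensions).

```python
import math

# In SQL Server 2025 the equivalent lookup would be roughly (assumption
# based on preview documentation):
#   SELECT TOP 1 id FROM docs
#   ORDER BY VECTOR_DISTANCE('cosine', embedding, @query_vector);

def cosine_distance(a, b):
    """1 minus cosine similarity: 0 means identical direction, 2 means opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Toy embedding store: id -> vector (a VECTOR(3) column in SQL Server terms).
docs = {
    "intro_to_sql": [0.9, 0.1, 0.0],
    "cooking_pasta": [0.0, 0.2, 0.9],
    "tsql_tuning": [0.8, 0.2, 0.1],
}

query = [0.9, 0.1, 0.0]  # embedding of, say, "getting started with SQL"
best = min(docs, key=lambda name: cosine_distance(query, docs[name]))
print(best)  # intro_to_sql
```

Brute-force scans like this are exactly what vector indexes exist to avoid at scale, which is why indexing strategy features in the session's best-practices discussion.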
Data Evolution: Harnessing Version Control for Effective Data Management
In today's data-driven world, where information is the lifeblood of decision making, the seamless management and evolution of data is paramount. Data version control, like its software counterpart, allows organisations to maintain a historical record of data changes and facilitates a collaborative approach to data management. It provides an indispensable framework for tracking and managing changes to data, ensuring data integrity, collaboration and, most importantly, the systematic evolution of data integration processes.
But is data version control also becoming a key concept in data-driven projects, similar to the version control systems used in software development?
In this session, we explore the importance of data version control in the context of data management, highlighting its key role in navigating the complexities of data integration and evolution. We present various projects and solutions such as Dolt, DVC, and lakeFS and show how they can be used effectively in projects on the Microsoft Data Platform.
Data Observability with HEDDA.IO
Are you able to fully understand the state of your data in your systems? Are you able to evaluate the quality of your data in all processes?
In this session you will learn how easy it is to integrate HEDDA.IO into your existing processes, how it can provide you with a continuous view of the quality of your data and how it can inform you directly in the event of errors, warnings or major changes to individual parameters.
HEDDA.IO brings data observability to your process and platforms.
Data Quality Roundtrip 2023 in the Microsoft Data Platform
Data quality tools and services in the Microsoft Data Platform have always played a niche role. Does anyone remember DQS in SQL Server? As data governance becomes an increasingly important part of Microsoft's strategy, we will run a demo-packed session to show you what is currently available in Fabric, in Azure services, and in Microsoft Purview. We will discuss the pitfalls and the potential, and take a look at the benefits and a possible strategy for your data estate.
Integrate Data Quality into your processes
HEDDA.IO is a central data quality management solution that connects departments, data stewards and data engineers. It helps to easily integrate standardization, cleansing, matching and enrichment tasks into existing processes.
We use high-performance runners to integrate HEDDA.IO as smoothly as possible into the existing processes of a modern data stack. HEDDA.IO runners exist for PySpark and .NET and can therefore be used without any problems in Databricks, Azure Synapse Analytics, and also in Visual Studio Code together with Polyglot Notebooks.
Developers can thus extend their processes developed in Notebooks with HEDDA.IO. The full integration provides developers with IntelliSense and a meaningful widget for the results of an execution. All HEDDA.IO runners are designed to run on their respective systems (Databricks, Synapse, etc.) and not on the HEDDA.IO services.
Join us for this session to learn how you can quickly and easily add a high performance data quality solution to your existing and new processes.
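To illustrate the runner pattern described above — quality rules executing on the compute engine that already holds the data — here is a minimal sketch in plain Python standing in for a PySpark or .NET runner. None of the names below are HEDDA.IO's actual API; they are hypothetical stand-ins for the kind of rule evaluation a runner performs.

```python
def run_rules(rows, rules):
    """Apply named rule functions to each row; collect pass/fail counts.
    A real runner would do this distributed, on Spark partitions."""
    results = {name: {"passed": 0, "failed": 0} for name in rules}
    for row in rows:
        for name, rule in rules.items():
            if rule(row):
                results[name]["passed"] += 1
            else:
                results[name]["failed"] += 1
    return results

# Hypothetical input batch and rule set.
rows = [
    {"email": "a@example.com", "amount": 120},
    {"email": "not-an-email", "amount": -5},
]
rules = {
    "email_has_at": lambda r: "@" in r["email"],
    "amount_non_negative": lambda r: r["amount"] >= 0,
}

report = run_rules(rows, rules)
print(report)
# {'email_has_at': {'passed': 1, 'failed': 1},
#  'amount_non_negative': {'passed': 1, 'failed': 1}}
```

The point of running the rules where the data lives is that only the aggregated result, not the data itself, needs to leave the processing engine.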
Simplify Your Data Quality Processes
Identifying relevant data quality issues and creating awareness within organizations of the threats and costs is an important step in an enterprise-wide data quality strategy. This includes implementing systems and processes that enable organizations to continuously monitor the value of data, as well as creating processes to directly address errors in data processing.
With HEDDA.IO we focus on these challenges and provide a system to integrate data quality into almost all processes within your organization.
In this session we will show how quickly and easily Azure Synapse Analytics, Azure Databricks, or .NET Interactive data processes can be extended with a data quality component and how you can add data observability to different scenarios.
