Speaker

Thomas Tomow

Azure MVP - Cloud, IoT & AI / Co-Founder Xebia MS Germany (formerly Xpirit Germany)

Stockach, Germany

As a Microsoft MVP for Azure, Thomas Tomow actively contributes to the community and is passionate about modern cloud technologies. He hosts Azure Meetups in Konstanz and Stuttgart, Germany. He is Chief Technology Officer (CTO) and Chief Operating Officer (COO) at Xebia MS Germany, where he leverages his expertise in cloud development, AI, and many other bleeding-edge technologies. In his free time, Thomas practices karate to maintain balance, and he is always eager to share his knowledge and experience with like-minded individuals.

Badges

  • Microsoft MVP

Area of Expertise

  • Information & Communications Technology

Topics

  • Azure IoT
  • Azure DevOps
  • Azure Data & AI
  • Azure Container Apps
  • DevOps
  • GitHub Actions
  • .NET
  • Leadership
  • Robotics

Sessions

Quickly prepare image data for an AI model yourself

This talk is aimed at developers who need to pre-process data for their AI model training without a lot of data science work and lengthy data preparation. Even in the short time since the AI hype began, quite a few approaches and pre-trained models have emerged that can be reused.
My talk demonstrates a practical approach from the field of computer vision. Using the example of building your own object detection model - one that recognizes not only generic classes such as cars and people, but special objects like a garbage can or your mother-in-law - the focus is on collecting and processing the necessary data. I will discuss simple ways to quickly work through gigabytes of image material in order to achieve fast results. With a few simple steps, participants can generate their own AI models in their own projects and quickly get creative in building them out in their own ways.
At the end of the talk, a demo will show the path from raw data to a simple application that uses our trained model and recognizes different objects.
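
To make the data-preparation step concrete, here is a minimal sketch (not part of the session material, purely illustrative): it sorts raw images into label folders and creates a train/validation split, assuming the label is encoded in each file name.

```csharp
// Minimal sketch (illustrative only): sort raw images into label folders and
// split them into train/validation sets before handing them to a training service.
using System;
using System.IO;

class PrepareImages
{
    static void Main()
    {
        var rawDir = "raw-images";          // assumption: flat folder of collected images
        var outDir = "dataset";             // output: dataset/train/<label>, dataset/val/<label>
        var rng = new Random(42);

        foreach (var file in Directory.EnumerateFiles(rawDir, "*.jpg"))
        {
            // assumption: the label is encoded in the file name, e.g. "garbagecan_0123.jpg"
            var label = Path.GetFileNameWithoutExtension(file).Split('_')[0];
            var split = rng.NextDouble() < 0.8 ? "train" : "val";   // 80/20 split

            var targetDir = Path.Combine(outDir, split, label);
            Directory.CreateDirectory(targetDir);
            File.Copy(file, Path.Combine(targetDir, Path.GetFileName(file)), overwrite: true);
        }

        Console.WriteLine("Dataset prepared under: " + Path.GetFullPath(outDir));
    }
}
```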

This talk is aimed at software developers who already have some experience with AI but little exposure to the world of data scientists.
My intention is to bring the topic of AI closer to developers, as I believe that in the near future AI will be increasingly driven by software developers.
For my talk I need some space on a table, as I set up a small piece of IoT hardware that recognizes objects with a webcam.

Master Class: Architecting and Developing Custom Copilots

As a professional architect or software developer, you are challenged to develop new applications that utilize Large Language Models (LLMs).
In this master class, we first explain how LLMs and GenAI work under the hood. We'll start with the basics and dive deep into topics such as token generation, vector similarity, embeddings, different model types, plugins, agents, and much more. Through numerous demos, we will show how .NET developers - and developers from other ecosystems - can create AI agents that bring intelligent behavior into any application.

You will also learn about Microsoft's Semantic Kernel and how to build applications and powerful backends that integrate AI into the Azure platform.

This master class is designed for professional software architects and developers.

Technologies used: LLMs from (Azure) OpenAI, Semantic Kernel, .NET
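
As a flavor of the kind of code shown in the demos, here is a minimal sketch, assuming the Semantic Kernel 1.x NuGet packages and an Azure OpenAI chat deployment; the deployment name, endpoint, and key are placeholders.

```csharp
// Minimal sketch (assumptions: Microsoft.SemanticKernel 1.x packages and an
// Azure OpenAI chat deployment; deployment name, endpoint, and key are placeholders).
using System;
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",                                  // placeholder deployment
    endpoint: "https://YOUR-RESOURCE.openai.azure.com/",       // placeholder endpoint
    apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!);
var kernel = builder.Build();

// Ask the model through the kernel; plugins and agents build on the same abstractions.
var answer = await kernel.InvokePromptAsync(
    "Explain what an embedding is in two sentences.");
Console.WriteLine(answer);
```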

Big Brother is watching you - Local GenAI in applications

Maybe you're not comfortable using LLMs in the cloud, maybe your customers have reasons to avoid those services, and so on. There are many reasons not to use the major providers' LLMs, but you don't have to do without GenAI capabilities in your application.
There are ways out, which Thomas Tomow shows in this presentation and illustrates with code.
He will show you what you need, how to set up your environment, and what to include in your code to build a GenAI-enhanced application.
By the end of the talk, you will know the key differences between the cloud LLMs and the local models. You will be able to build your own model zoo and use it with your C# applications.
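
To illustrate the general direction, here is a minimal sketch of calling a local model from C#, assuming a local runtime such as Ollama that exposes an OpenAI-compatible chat endpoint on localhost; the model name and port are illustrative.

```csharp
// Minimal sketch (assumption: a local runtime such as Ollama is serving an
// OpenAI-compatible chat endpoint at http://localhost:11434; model name is illustrative).
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Text.Json;
using System.Threading.Tasks;

class LocalChat
{
    static async Task Main()
    {
        using var http = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

        var request = new
        {
            model = "phi3",                        // illustrative local model name
            messages = new[]
            {
                new { role = "user", content = "Explain embeddings in one sentence." }
            },
            stream = false
        };

        var response = await http.PostAsJsonAsync("/v1/chat/completions", request);
        response.EnsureSuccessStatusCode();

        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        var answer = doc.RootElement
                        .GetProperty("choices")[0]
                        .GetProperty("message")
                        .GetProperty("content")
                        .GetString();
        Console.WriteLine(answer);
    }
}
```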

Optimizing services with Azure Container Apps and DAPR: A hands-on workshop

In this workshop, we explore how Azure Container Apps, in conjunction with DAPR, can improve the development and management of services. We will introduce the challenges of distributed systems and show practical ways to overcome them.
Participants will learn about the benefits and uses of Azure Container Apps and DAPR and understand the challenges of developing distributed services.
The workshop includes an introduction to DAPR and Azure Container Apps, a discussion of challenges with distributed services, step-by-step building of a sample application, improving the service architecture with DAPR, and the necessary deployment process.
This topic is aimed at developers who want to deepen their knowledge of cloud development and gain practical experience in using modern tools and platforms.
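
To give an impression of the style of code built in the workshop, here is a minimal sketch using the Dapr .NET SDK; the state store component name, app-id, and method name are illustrative assumptions.

```csharp
// Minimal sketch (assumptions: Dapr sidecar running, Dapr.Client NuGet package,
// a state store component named "statestore", and a target service with app-id "orders").
using System;
using System.Threading.Tasks;
using Dapr.Client;

class DaprDemo
{
    public record Order(string Id, decimal Total);

    static async Task Main()
    {
        var dapr = new DaprClientBuilder().Build();

        // Save and read state through the Dapr building block instead of a concrete database.
        var order = new Order("42", 99.90m);
        await dapr.SaveStateAsync("statestore", order.Id, order);
        var loaded = await dapr.GetStateAsync<Order>("statestore", order.Id);
        Console.WriteLine($"Loaded order {loaded!.Id} with total {loaded.Total}");

        // Call another container app via Dapr service invocation (app-id instead of URL).
        var result = await dapr.InvokeMethodAsync<Order, string>("orders", "confirm", order);
        Console.WriteLine(result);
    }
}
```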

For this workshop, participants need an Azure subscription. They should bring a laptop with VS Code or Visual Studio installed.
With this knowledge they can also work on projects that use similar technologies.

Shrinking giants: Why large language models become less big

In his talk, Thomas Tomow explores why the miniaturization of Large Language Models (LLMs) is so important in today's technological world. Thomas focuses on the significant benefits of smaller and more efficient models in various fields such as robotics, healthcare, and customer service.

He provides insights into the innovative methods that enable the downsizing of LLMs, including model compression and fine-tuning. These approaches allow models to remain powerful and flexible despite their reduced size while saving resources.

Thomas also sheds light on the challenges and opportunities associated with developing and using these more compact AI models. Not only the technical aspects but also ethical and practical considerations are taken into account, particularly with regard to the use of these technologies in sensitive areas.

Finally, Thomas Tomow looks at the future prospects of this technology. He discusses how ongoing research and innovation can help develop these models further to create effective and ethical solutions for various applications.

Companies, especially in audited fields, often lack unrestricted access to the cloud for several reasons. Data residency and ownership requirements are generally what call for hosting Large Language Models on-premises. But OpenAI's ChatGPT or Gemini can't simply be downloaded, so that is what this talk is about: the possibility of doing it anyway and gaining much more control over your own data while keeping the comfort of a ChatGPT-style experience.

First ARM, then Bicep, now Radius: Microsoft's approach to simple deployments

The evolution of cloud-native application development reaches a new dimension with Microsoft's Radius. In this talk, Radius will be presented as a forward-thinking platform that is primarily focused on the Microsoft ecosystem but also bridges to other cloud platforms such as AWS and GCP. This universal compatibility makes Radius a valuable resource for many cloud developers.

Radius integrates proven technologies such as Kubernetes, Terraform, and Bicep to create an efficient and flexible development environment that is not only relevant for Azure developers but also for users of other cloud services. Thomas' talk will focus on introducing the key features of Radius, especially the "Recipes" feature that automates the provisioning and configuration of infrastructure resources.

The demonstration of Radius during the talk will provide a practical insight into the applications and benefits of the platform, with a focus on simplifying and accelerating development processes. This demonstration will show how Radius helps developers to work more efficiently regardless of their preferred cloud platform.

Finally, Thomas will highlight the strategic importance of Radius in the landscape of cloud-native development and discuss how it extends the Microsoft ecosystem while providing a welcoming environment for developers from different cloud environments. Radius thus exemplifies a platform that promotes flexibility and openness in cloud development and bridges the boundaries between different cloud ecosystems.

Radius is a new technology coming from the Microsoft Incubation team. Their goal was to create a layer on top of Azure deployments with Bicep, built around a more abstract idea such as "create a database and the application in Azure". This makes it easy to change the details under the hood and even to swap resources while keeping the solution intact. It is still very new.

Legacy App Modernization - How to start right?

In this talk, Thomas Tomow will focus on the increasingly important topic of legacy app modernization. He will discuss the challenges that businesses face in this process, such as selecting the right technology, deciding what to modernize, and developing a modernization strategy. But even when the plan is in place and the modernization process can begin, a crucial question often arises: how do we start?
Thomas will address this essential question in his talk and share practical tips and strategies from his own experience to untangle the "Gordian knot" and start the modernization process successfully. Using a real-world example, he will discuss the necessary collaboration models and project frameworks, as well as technological frameworks such as static code analysis and SBOMs, to ensure the success of the modernization. This talk is targeted at technical audiences and provides valuable tips and best practices for getting started with legacy app modernization.

The target audience - DevOps engineers, developers, or IT architects - does not need to be deeply technical; this is not a guide for project managers. I am sharing my own experience, which is why I know the topic well. I have several years of experience as a speaker, and with my YouTube channel I am working to improve my presentation skills.

Hello postman! AI on the edge detects you.

#IoT is a familiar topic; #AI is the new one. Bring the two together and you have amazing opportunities. The field of machine learning is gaining momentum and is now a lot easier to use for people like us - developers. It is important to me to take away the "fear" of AI topics and to show the new possibilities of machine learning to people who are not data scientists.
Based on my own story and a live demo, I guide participants from the idea to the practical implementation in the field of AI (enabled through Azure). I will discuss technologies such as Cognitive Services, containers, and IoT Edge and explain the technical procedure. At the end of my presentation, participants will understand the potential as well as the challenges and will have their first entry points for their own scenarios.

Top 3 highlights of attending:
- See a Custom Vision model being built in Azure and running on a device
- Watch live how a device detects a postal van in the room, consuming what we built in Azure
- It's a real-life story, so the audience gets first-hand experience
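
As a rough illustration of the device-side call (not the exact session code), here is a minimal sketch that sends a webcam frame to a published Custom Vision object-detection endpoint; the prediction URL and key are placeholders taken from the Custom Vision portal.

```csharp
// Minimal sketch (assumptions: an object-detection model published in Azure Custom Vision;
// the prediction URL and key below are placeholders taken from the Custom Vision portal).
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

class DetectPostalVan
{
    static async Task Main()
    {
        var predictionUrl = Environment.GetEnvironmentVariable("CUSTOM_VISION_PREDICTION_URL")!;
        var predictionKey = Environment.GetEnvironmentVariable("CUSTOM_VISION_PREDICTION_KEY")!;

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Prediction-Key", predictionKey);

        // Send a webcam frame (here: a file on disk) to the prediction endpoint.
        using var image = new ByteArrayContent(await File.ReadAllBytesAsync("frame.jpg"));
        image.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

        var response = await http.PostAsync(predictionUrl, image);
        response.EnsureSuccessStatusCode();

        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        foreach (var p in doc.RootElement.GetProperty("predictions").EnumerateArray())
        {
            Console.WriteLine($"{p.GetProperty("tagName").GetString()}: " +
                              $"{p.GetProperty("probability").GetDouble():P1}");
        }
    }
}
```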

MLOps - What data scientists can learn from developers

Where DevOps tries to improve the collaboration between developers and IT pros, data scientists now also have to find their way of working with developers and IT pros. The DevOps concept extends very well to the field of AI - MLOps (data science ops).
This talk looks at how data scientists work and shows how they can collaborate with developers and IT pros. Among others, the following questions will be answered:
- Which flow does MLOps map?
- Where do I store training code?
- How can I manage my models (versioning / history)?
- How does monitoring work?
- Can I automate this as well - and how do I get it into production?
Using an example flow, I walk through the answers and demonstrate which tools, such as Git, Azure ML Service, and GitHub, make the implementation possible.
By the end of the session, participants will understand why MLOps matters and how to use the tools correctly to support a smooth workflow.

I frequently work on this topic in my day job and see a constant need to talk about it. In companies, I often see a high level of interest in introducing MLOps and getting to know the corresponding tools.
Data scientists, but also software developers working in the AI space, should make use of the lessons from the DevOps journey and the possibilities in AI. I believe I can offer something for everyone here.

Ring-based Gated Releases - Secure and automated deployments with GitHub Actions

Have you ever wondered why so many deployments are still approved manually, even though the approval often carries little information and hardly serves as a real safeguard? If you use ring-based deployments, it is all the more important to increase the degree of automation and not rely on manual approvals. This is where gated releases come into play.

Gated releases make it possible to check various signals before the application is promoted to the next environment - for example system metrics such as CPU, RAM, and I/O, unhandled exceptions in log files, or terminated processes. With gated releases you can fully automate the release process and take continuous deployment to the next level.

In this hands-on talk, Thomas shows how you can implement fully automated gated releases for ring-based deployments with GitHub Actions and the GitHub API. This talk is aimed at DevOps engineers and developers who want to improve and optimize their release pipelines.
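
To sketch the core idea (an illustrative assumption, not the session's exact implementation): evaluate the ring's signals, then approve or reject the pending deployment of the workflow run through the GitHub REST API.

```csharp
// Minimal sketch (assumptions: the health signals come from your own monitoring query,
// and the GitHub REST endpoint for reviewing pending deployments is used to approve or
// reject the next ring; owner/repo, run id, and environment id are placeholders).
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Threading.Tasks;

class GateCheck
{
    static async Task Main()
    {
        // 1. Evaluate the signals of the current ring (CPU, unhandled exceptions, crashed processes, ...).
        bool healthy = await RingIsHealthyAsync();

        // 2. Approve or reject the pending deployment of the workflow run via the GitHub API.
        using var http = new HttpClient { BaseAddress = new Uri("https://api.github.com/") };
        http.DefaultRequestHeaders.UserAgent.ParseAdd("gate-check");
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", Environment.GetEnvironmentVariable("GITHUB_TOKEN"));

        var payload = new
        {
            environment_ids = new[] { 123456L },          // placeholder environment id
            state = healthy ? "approved" : "rejected",
            comment = healthy ? "All ring signals green." : "Ring signals degraded."
        };

        var response = await http.PostAsJsonAsync(
            "repos/OWNER/REPO/actions/runs/987654321/pending_deployments", payload);
        Console.WriteLine($"Gate decision sent: {(int)response.StatusCode}");
    }

    // Placeholder for the real monitoring query (Application Insights, Prometheus, ...).
    static Task<bool> RingIsHealthyAsync() => Task.FromResult(true);
}
```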

My session can fill 45 to 60 minutes of content. It is aimed at developers as well as DevOps engineers and similar roles.
Since I would like to include a hardware demo as a showcase (optional, not mandatory), I need a bit of space [roughly an A3-sized area]. As GitHub has become a very important development platform and gated releases are not yet available out of the box in GitHub, this will be a very valuable session for attendees - especially those from the enterprise sector.

Kubernetes for everyone - Reaching your goal faster with Azure Container Apps

When working with Docker containers, you usually run into the topic of Kubernetes sooner or later. Unfortunately, it is not exactly known for being quickly "installed" and administered. There are different networking terminologies to know, security mechanisms to establish, various resources such as compute, storage, and so on to provision, and much more. For developers and IT pros without the corresponding knowledge and experience, running their own container applications is therefore a major drain on time and nerves.

With Azure Container Apps, Microsoft provides another service for managing containers. It offers a hosting infrastructure built on concepts from the Kubernetes world, but with almost no expert knowledge required. You don't have to give up KEDA scalers, you can conveniently define app environments, and with Dapr you can comfortably build decoupled service architectures.

"So, what's in it for me?" Thomas Tomow introduces Azure Container Apps and explains who this service is suited for and what can be accomplished with it. He also addresses its limitations and covers lifecycle and deployments. Of course, cost comparisons and alternatives are not left out.
To underline the "why" even better, Thomas will demonstrate the service's strengths. By the end of the session, attendees can place this service precisely, understand its particular benefits and its limits, and use it themselves.

This talk is aimed at developers and IT pros who would rather get their solutions up and running than set up foundations such as Kubernetes clusters. It is also about reducing infrastructure overhead. Attendees will be better able to judge which path to take for deploying their Docker images. Since the world increasingly speaks Docker, this talk is interesting for newcomers as well as veterans looking for new approaches.

Revolutionizing DevOps: Harnessing AI for unprecedented efficiency

Artificial Intelligence (AI) is reshaping industries worldwide, and its profound influence extends to DevOps practices. In this enlightening conference talk, join Thomas as he delves into the extensive possibilities of enhancing DevOps processes through data and AI integration. From optimizing code development (DevEx) and streamlining pipelines to enhancing production environments with AI-driven monitoring, anomaly detection, and self-healing systems, he will illustrate the remarkable synergy between AI bots and ChatOps. Discover how these AI-driven solutions can efficiently resolve incidents and assign the most qualified engineers to ongoing tasks, ushering in a new era of DevOps efficiency and effectiveness.

A short version was already presented as a keynote at DevOpsCon Berlin (in English). For longer sessions, I have more demos and technology and can go deeper into several aspects.

RAG-nificent: Setting a New Standard in Intelligent Information Retrieval

In my presentation on Retrieval Augmented Generation (RAG), I will give a comprehensive overview of this key technology in the modern search landscape. RAG combines retrieval and generation models to deliver more accurate and contextualized search results. I'll explain the technical foundations, including vector representation, indexing, vector databases, and integration with large language models (LLMs).
I'll highlight recent advances in RAG-based search systems, such as intent detection, routing, and hallucination prevention. Using personal experiences, I'll show how RAG is used in different industries to optimize search processes and improve user experiences. I'll also discuss current challenges in implementing RAG systems and present innovative solutions.
Finally, I'll look at the future of RAG in search and potential developments that could revolutionize how we find and process information.
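
To make the retrieval step tangible, here is a minimal, purely illustrative sketch: embed the question, rank pre-embedded chunks by cosine similarity, and build a grounded prompt; the EmbedAsync call stands in for whichever embedding model you use.

```csharp
// Minimal sketch of the retrieval step in RAG (illustrative only): embed the question,
// rank pre-embedded chunks by cosine similarity, and build a grounded prompt.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

class RagRetrieval
{
    public record Chunk(string Text, float[] Vector);

    static async Task Main()
    {
        var question = "How do I reset my password?";
        var questionVector = await EmbedAsync(question);

        // In a real system these chunks live in a vector database.
        var chunks = new List<Chunk>
        {
            new("To reset your password, open the account settings page.", await EmbedAsync("reset")),
            new("Invoices can be downloaded from the billing section.", await EmbedAsync("billing")),
        };

        var topChunks = chunks
            .OrderByDescending(c => CosineSimilarity(questionVector, c.Vector))
            .Take(3)
            .Select(c => c.Text);

        var prompt = "Answer using only the following context:\n" +
                     string.Join("\n---\n", topChunks) +
                     $"\n\nQuestion: {question}";
        Console.WriteLine(prompt);   // send this prompt to the LLM of your choice
    }

    static double CosineSimilarity(float[] a, float[] b)
    {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
        return dot / (Math.Sqrt(na) * Math.Sqrt(nb) + 1e-9);
    }

    // Placeholder: call your embedding model (Azure OpenAI, a local model, ...) here.
    static Task<float[]> EmbedAsync(string text) => Task.FromResult(new float[1536]);
}
```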

This is a brand new presentation. I am incorporating some of my own experience that I have gained in customer projects.

The actor model and Azure Container App Services: Leveraging Orleans to build distributed systems

In this talk, we will explore the use of actors in Azure Container App Services to maintain state within distributed systems, while also leveraging the developer-friendly serverless approach. Actors are a programming model that enables developers to build highly concurrent, scalable, and reliable applications using the actor pattern. In the context of Azure Container App Services, actors allow developers to build stateful, microservices-based applications that can be deployed and managed in containers, with the added benefit of a serverless approach that abstracts away infrastructure management and scaling, making it easy for developers to focus on writing their code. One framework that makes it easy to implement actors in Azure Container App Services is Orleans. Orleans is a cross-platform, open-source framework that simplifies the development of distributed systems using the actor model.
Throughout the talk, Thomas will be demonstrating the concepts through a live practical demo, which will help to get a better understanding of the topic and see the concepts in action. He will discuss the benefits of using Orleans and actors in Azure Container App Services, including improved performance, scalability, and reliability, while also leveraging the developer-friendly serverless approach, as well as provide guidance on how to get started with Orleans and actors in Azure Container App Services. This talk will be of interest to developers and architects who are looking to build stateful, scalable, and reliable applications in the cloud.
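
For orientation, here is a minimal sketch of an Orleans grain of the kind discussed in the talk, assuming the Microsoft.Orleans 7+ packages; the names are illustrative.

```csharp
// Minimal sketch of an Orleans grain (assumptions: Microsoft.Orleans 7+ packages,
// hosted inside a container app; names are illustrative).
using System.Threading.Tasks;
using Orleans;

// A grain interface: each device gets its own virtual actor, addressed by its key.
public interface IDeviceGrain : IGrainWithStringKey
{
    Task ReportTemperature(double celsius);
    Task<double> GetLastTemperature();
}

// The grain keeps its state in memory between calls; Orleans activates and
// deactivates instances transparently across the cluster.
public class DeviceGrain : Grain, IDeviceGrain
{
    private double _lastTemperature;

    public Task ReportTemperature(double celsius)
    {
        _lastTemperature = celsius;
        return Task.CompletedTask;
    }

    public Task<double> GetLastTemperature() => Task.FromResult(_lastTemperature);
}

// Calling code (e.g. in a minimal API endpoint) resolves the actor by key:
//   var grain = grainFactory.GetGrain<IDeviceGrain>("postal-van-cam");
//   await grain.ReportTemperature(21.5);
```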

This is a brand new talk that shows the power of being serverless with state. It will be interesting for devs and architects because the topic enables them to build solutions instead of managing infrastructure.
I require an internet connection and, if possible, some additional space for placing an IoT device.

Scaling AI Agents: Unlocking Cloud Potential with DAPR and Semantic Kernel

Scaling AI agents to meet high user demand is often seen as a daunting challenge. However, with the right tools and strategies, it can be more straightforward than you might think. In this talk, I offer a fresh perspective on scaling AI agents in the cloud, challenging traditional approaches and introducing innovative solutions.
This talk addresses common challenges in AI agent scaling, including resource management, latency issues, and integration complexities. We'll explore how DAPR's sidecar pattern mitigates these problems by abstracting away infrastructure concerns, while Semantic Kernel's modular approach simplifies AI integration and improves maintainability.
I will demonstrate how DAPR and Semantic Kernel can revolutionize your AI integration strategy. By showcasing code snippets and implementation techniques, you'll gain practical insights into overcoming scalability challenges and transforming your applications.
Key learning objectives include:
- Understanding architectural patterns for scalable AI agents
- Implementing efficient resource management strategies
- Leveraging DAPR and Semantic Kernel for seamless AI integration
- Developing resilient and performant cloud-based AI solutions
By the end of this session, you'll have actionable strategies and hands-on knowledge to implement scalable AI agents, enhancing both performance and user experience. Gain the confidence to deploy cloud-based AI solutions effectively and unlock new potential for your applications.
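
As a small, illustrative sketch of the fan-out idea (assuming the Dapr .NET SDK and a pub/sub component named "pubsub"; topic and payload names are made up): work is published to a topic instead of being sent to a single agent instance, so scaling out simply means adding replicas.

```csharp
// Minimal sketch of the fan-out pattern (assumptions: Dapr sidecar with a pub/sub
// component named "pubsub"; topic name and payload are illustrative). Each agent
// replica subscribes to the topic, so scaling out is just adding replicas.
using System.Threading.Tasks;
using Dapr.Client;

public record AgentTask(string ConversationId, string UserMessage);

public class AgentDispatcher
{
    private readonly DaprClient _dapr = new DaprClientBuilder().Build();

    // Publish work for the agents instead of calling one instance directly;
    // Dapr (and the Container Apps scaler) decide which replica picks it up.
    public Task DispatchAsync(string conversationId, string message) =>
        _dapr.PublishEventAsync("pubsub", "agent-tasks",
            new AgentTask(conversationId, message));
}
```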

This is a brand new presentation. I am incorporating some of my own experience that I have gained in customer projects.

Robots 2.0: When artificial intelligence meets steel

The integration of artificial intelligence into robotics is leading to a new era of intelligent autonomous systems. This presentation will explore how advanced AI technologies, including vision-language models (VLMs) and object detection, are enhancing the capabilities of robots in various industries, such as manufacturing, healthcare, and logistics.
We will highlight key robotic developments, including examples like Figure, X1, and Optimus, discussing their advancements and limitations. A significant focus will be on the challenges of deploying AI on limited hardware, ensuring safety, and facilitating real-world interactions.
I will also showcase my own robot creation, explaining how VLMs and object detection are used to enhance understanding and interaction with the environment. Join us to discover how AI is reshaping the future of robotics and learn how to leverage these technologies to create smarter, more capable robotic systems.

This is a brand new talk. It deals in particular with a topic that will become important in the medium term. Increasingly, the field of robotics is becoming more interesting for developers of all backgrounds. The combination of abstracted hardware and modern dev topics will certainly arouse a lot of interest, but also uncover new potential for use. I focus more on devs than on hardware engineers.

Master Class: Building a Retrieval Augmented Generation (RAG) Architecture with C#

Unlock the potential of Retrieval Augmented Generation (RAG) in this hands-on masterclass designed for developers, data scientists, and IT decision-makers. Dive deep into the integration of Kernel Memories and Semantic Kernel using C#, and transform your understanding of modern search technologies.

Begin with a foundational overview of Large Language Models (LLMs) and embedding models, exploring key concepts such as vectorization and vector databases. Gain insights into effective search strategies and the technical underpinnings of RAG, which combines retrieval and generation models for precise, context-aware search results.

Participants will engage in setting up a complete RAG architecture, learning through practical exercises and real-world examples. Discover how to enhance out-of-the-box solutions like Azure AI Search with Azure OpenAI Service to achieve more reliable search results. Learn to structure documents and data effectively, and understand why successful search involves more than just typing a query. Explore additional search strategies to maximize the capabilities of your search engines.
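
To illustrate the document-structuring step, here is a minimal, purely illustrative sketch that splits a document into overlapping chunks before embedding and indexing; chunk size and overlap are arbitrary choices.

```csharp
// Minimal sketch (illustrative only): split a document into overlapping chunks
// before embedding and indexing them; chunk size and overlap are arbitrary choices.
using System;
using System.Collections.Generic;

static class Chunker
{
    public static IEnumerable<string> Split(string text, int chunkSize = 800, int overlap = 200)
    {
        if (chunkSize <= overlap)
            throw new ArgumentException("chunkSize must be larger than overlap");

        for (int start = 0; start < text.Length; start += chunkSize - overlap)
        {
            int length = Math.Min(chunkSize, text.Length - start);
            yield return text.Substring(start, length);
            if (start + length >= text.Length) yield break;
        }
    }
}

// Usage: foreach (var chunk in Chunker.Split(File.ReadAllText("contract.txt"))) { /* embed + index */ }
```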

With experience spanning projects in legal, contracting, and first-level support search, I bring a wealth of practical knowledge to the table. Equip yourself with actionable strategies and practical insights to integrate RAG into your projects, enhancing search processes and user experiences. Join us for a transformative day of learning and innovation.

.NET Developer Conference '24

November 2024 Köln, Germany

Microsoft Student Community Tour 2023

November 2023

Developer Week '23

June 2023 Nürnberg, Germany

NDC Oslo 2023

May 2023 Oslo, Norway

Developer Week '22

July 2022 Nürnberg, Germany

Global Azure Munich 2022

May 2022 Munich, Germany

Developer Week '20

June 2020 Nürnberg, Germany

Cloud DeveloperDays 2020

May 2020
