Tori Tompkins
Principal AI Consultant at Advancing Analytics
London, United Kingdom
Tori is a Principal AI Consultant at Advancing Analytics and a Microsoft AI MVP. Specialising in MLOps and LLMOps, she has worked on many ML and data science projects with Azure and Databricks across all stages of the ML lifecycle. She is a co-presenter of the Data & AI podcast Totally Skewed, founder of Girls Code Too UK, and a Trustee of Girls in Data.
Building a Production-Ready Q&A App with Databricks AI Tools
This hands-on workshop guides participants through the full process of building a production-grade question-answering application using Databricks' integrated AI capabilities. Starting from a blank slate, attendees will learn how to use Agent Bricks to create intelligent agents, leverage Mosaic AI for orchestration and observability, and integrate vector stores for semantic search.
The session also covers how to evaluate and monitor models and agents using MLflow, and how to deploy the final application using Databricks Apps for real-world usability. By the end of the workshop, participants will have built and deployed a Retrieval-Augmented Generation (RAG) Q&A system capable of delivering context-aware answers from enterprise data.
This workshop is ideal for data professionals and developers looking to understand how Databricks supports the full lifecycle of intelligent application development, from experimentation to deployment.
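As a flavour of what the workshop builds, here is a minimal sketch of the retrieval step, assuming the databricks-vectorsearch package, a Databricks notebook for authentication, and a pre-built Vector Search index; the endpoint, index, and column names are hypothetical.

```python
# Minimal sketch of the retrieval step behind a RAG Q&A app on Databricks.
# Assumes a Databricks notebook (for auth) and a pre-built index; all names
# below are hypothetical placeholders.
from databricks.vector_search.client import VectorSearchClient

vsc = VectorSearchClient()
index = vsc.get_index(
    endpoint_name="docs-endpoint",           # hypothetical endpoint name
    index_name="catalog.schema.docs_index",  # hypothetical index name
)

question = "What is our refund policy?"
hits = index.similarity_search(
    query_text=question,
    columns=["chunk_text"],  # hypothetical column holding document chunks
    num_results=3,
)

# Assemble the retrieved chunks into a grounded prompt for the LLM.
context = "\n\n".join(row[0] for row in hits["result"]["data_array"])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```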
Real-Time AI with Databricks Online Feature Stores: Powered by Lakebase
Databricks has launched a new generation of Online Feature Stores powered by Lakebase, offering scalable, low-latency access to feature data for real-time machine learning and generative AI. This session introduces the new architecture, highlights key differences from previous feature store implementations, and explains when and how to use them effectively. Through live demos, we’ll explore publishing feature tables, enabling streaming updates, and serving features to real-time applications using Unity Catalog and Lakehouse-native workflows.
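To illustrate the publishing step, below is a minimal sketch using the Databricks Feature Engineering client; it assumes a Databricks notebook environment (where `spark` is predefined), uses hypothetical Unity Catalog names, and leaves the Lakebase online-store sync itself to be configured separately.

```python
# Minimal sketch of creating and updating a feature table with the Databricks
# Feature Engineering client. Assumes a Databricks notebook; all names are
# hypothetical, and the Lakebase online sync is configured outside this snippet.
from databricks.feature_engineering import FeatureEngineeringClient

fe = FeatureEngineeringClient()

features_df = spark.createDataFrame(
    [("u1", 3), ("u2", 7)],
    "user_id string, purchases_30d int",
)

fe.create_table(
    name="catalog.schema.user_features",  # hypothetical Unity Catalog name
    primary_keys=["user_id"],
    df=features_df,
    description="Per-user features for real-time serving",
)

# New batches (or streaming micro-batches) can then be merged in over time.
fe.write_table(
    name="catalog.schema.user_features",
    df=features_df,
    mode="merge",
)
```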
Stitching GenAI Pipelines with RAG in Microsoft Fabric
In the evolving landscape of data and AI, GenAI has emerged as a game-changer for organisations seeking to derive deeper insights from their data. Retrieval-Augmented Generation (RAG) is a ground-breaking technique that combines retrieval-based and generative AI models to seamlessly integrate and enhance data from multiple sources while reducing model hallucinations.
We will explore how RAG leverages GenAI to provide contextually enriched, high-quality outputs. With practical demonstrations, attendees will learn how RAG can be implemented in Fabric to solve complex data challenges. This session will cover the technical underpinnings of RAG, its application in various industries, and the measurable benefits it delivers, along with practical guidance on where to get started.
This session is for data professionals, Fabric fans and Data Scientists aiming to harness the latest advancements in GenAI for enhanced data-driven outcomes.
No, You Aren’t Hallucinating – It’s RAG!
Retrieval-Augmented Generation (RAG) is one of the most effective strategies for making large language models (LLMs) smarter, more accurate, and domain-aware, without retraining them from scratch. In this fast-paced session, we'll break down its main components, why it is used, and why it has become the go-to approach for enterprise AI.
AI at Scale: Productionising Gen AI in Azure
As AI technologies evolve and interest in Generative AI solutions grows, scaling them into robust, secure, and reliable production environments is a critical challenge. In this session, AI Engineering Specialists join forces to demystify the process of productionising Generative AI and FMOps platforms in Azure.
By applying well-established security and engineering principles, we explore strategies to design scalable AI architectures, manage resources efficiently, and ensure enterprise-grade compliance. This talk will provide actionable insights and practical frameworks for bridging the gap between experimentation and deployment, empowering teams to operationalise AI with confidence.
Journey to the Centre of Large Language Models: A Beginner's Guide to Gen AI
You've probably noticed a big change in the data world lately — everybody's buzzing about Generative AI and Large Language Models (LLMs).
This session will explore everything you need to know about LLMs and Generative AI. Starting with the basics, we will cover how they actually work and how far they have come. Delving deeper, we'll discuss the capabilities and limitations of LLMs, providing insight into what they can and cannot achieve, and address the ethical considerations and potential risks associated with their usage. We will end with use cases, a demo, and what it takes to actually productionise an LLM workflow.
Getting started with MLOps in Azure
We are being asked more and more to work with various aspects of data, regardless of our core skill set. This is particularly the case when productionising Machine Learning Models.
In this session we will talk about various Azure technologies we can use in our day jobs to achieve this, including ML Studio, Databricks & AKS.
We will look at the various components needed and the different architectures that can be implemented, including how to manage Feature Stores and monitor Model Life Cycles.
When working with data it is vital that different Data Professionals, Software Developers/Engineers and others in tech work in harmony for successful outcomes. Consequently, we will also cover how best to achieve this, and how it has been done in the real world.
First Class MLOps with Databricks
Arguably the largest challenge in ML today is effectively deploying reliable and efficient models into production, with experts estimating that as many as 80% of models created never make it to production. MLOps streamlines the process of taking machine learning models to production, and then maintaining and monitoring them. With new MLOps micro-vendors popping up every day, is there a tool that does everything?
In this session, we will consider Databricks as an end-to-end MLOps tool, exploring collaborative workspaces, feature stores, model registries and model serving, touching upon other critical MLOps practices such as model fairness, explainability and monitoring.
With practical demos of Databricks Feature Store, MLflow, drift monitoring and real-time Model Serving, this session is suitable for Data Scientists and Machine Learning Engineers of all levels.
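As a taste of the MLflow demo, here is a minimal sketch of the tracking-and-registry flow, using an illustrative scikit-learn model and a hypothetical registry name.

```python
# Minimal sketch of the MLflow tracking-and-registry flow; the model,
# parameters, and registry name below are illustrative only.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

with mlflow.start_run() as run:
    mlflow.log_param("n_estimators", model.n_estimators)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")

# Register the logged model so it can be promoted and served later.
mlflow.register_model(f"runs:/{run.info.run_id}/model", "churn_classifier")
```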
Evaluating LLMs in Databricks with RAGAS and MLFlow
Evaluating LLMs is essential for ensuring they perform accurately and align with safety standards. This talk examines two frameworks for LLM evaluation: RAGAS and MLflow. We'll walk through practical applications of both, including a live demo that sets up an evaluation pipeline, monitors results, and refines metrics.
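As a flavour of the pipeline shown in the demo, below is a minimal sketch that scores a RAG output with RAGAS and logs the results to MLflow; it assumes a ragas 0.1-style API with an OpenAI key configured for the judge model, and the sample record is illustrative only.

```python
# Minimal sketch: score RAG outputs with RAGAS, then log the metrics to MLflow.
# Assumes ragas 0.1-style API and OPENAI_API_KEY set for the judge/embeddings.
import mlflow
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, faithfulness

eval_data = Dataset.from_dict({
    "question": ["What is MLflow?"],
    "answer": ["MLflow is an open-source platform for managing the ML lifecycle."],
    "contexts": [["MLflow is an open source platform to manage the ML lifecycle."]],
})

scores = evaluate(eval_data, metrics=[faithfulness, answer_relevancy])

with mlflow.start_run(run_name="rag_eval"):
    # In the 0.1-style API the result behaves like a dict of metric -> score.
    mlflow.log_metrics({name: float(value) for name, value in scores.items()})
```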
API Management for Azure OpenAI Service endpoints
Microsoft Build announced one-click deployment patterns for Azure OpenAI in API Management, with an all-new set of GenAI Gateway capabilities including load balancing, token limiting and semantic caching.
Learn how to enable and use this new capability (and why) in 5 minutes!
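For context, here is a minimal sketch of what the client side can look like once the gateway is in place; the gateway URL, API version, deployment name, and the use of the APIM subscription key as the api-key are assumptions for illustration.

```python
# Minimal sketch of calling an Azure OpenAI deployment through an API
# Management gateway. The endpoint path, key handling, API version and
# deployment name are illustrative assumptions.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-apim.azure-api.net/my-openai-api",  # hypothetical APIM gateway URL
    api_key="<apim-subscription-key>",  # assumes APIM accepts this as the api-key
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical deployment name behind the gateway
    messages=[{"role": "user", "content": "Hello from behind APIM!"}],
)
print(response.choices[0].message.content)
```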
A Data Engineer, Scientist and Analyst Walk Into a Bar
In the fast-growing world of data, there is not only one specific skill set required to be a data professional. If you are looking for a role in data or trying to build out a data team, it can be hard to make sense of it all.
Hear from four diverse voices in the industry as they define each role, delve into the essential skillsets required to excel in them and share their personal journeys into data. Discover how these roles collaborate and complement one another on a day-to-day basis and what a typical data project involving them would look like. Whether you're interested in building robust data pipelines, extracting insights from datasets, or creating cutting-edge machine learning models, this session will help you understand where you might fit in this diverse landscape.
Unlock your MLOps potential with Azure Machine Learning Studio
MLOps is essential for bridging the gap between data science and operations, enabling organizations to deliver reliable and scalable machine learning solutions that drive real-world impact.
In this session, we will explore the powerful capabilities of Azure Machine Learning Studio (AML) and learn how to leverage its features to build robust and efficient end-to-end MLOps solutions. AML provides a comprehensive set of tools and services to support the entire ML lifecycle, from data preparation and model training to deployment and monitoring, including advanced deployment techniques such as blue/green deployment.
By the end of this session, you will possess a comprehensive understanding of Azure ML Studio and its potential for constructing end-to-end MLOps solutions. Whether you are a data scientist, ML engineer, or a DevOps professional, this session is designed to equip you with the essential tools, knowledge, and best practices needed to harness the true power of Azure ML Studio and drive successful MLOps initiatives in your organization.
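To make the blue/green idea concrete, here is a minimal sketch using the Azure ML Python SDK v2; it assumes an MLflow-format registered model (so no scoring script is needed), an existing "blue" deployment, and placeholder workspace and model names.

```python
# Minimal sketch of the blue/green online-deployment pattern with azure-ai-ml.
# Workspace details, model name/version and instance sizing are placeholders.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment, ManagedOnlineEndpoint
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

endpoint = ManagedOnlineEndpoint(name="churn-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Roll out the candidate ("green") deployment alongside the existing "blue" one.
green = ManagedOnlineDeployment(
    name="green",
    endpoint_name="churn-endpoint",
    model="azureml:churn_classifier:2",  # hypothetical registered MLflow model
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(green).result()

# Shift a small share of traffic to green before a full cutover.
endpoint.traffic = {"blue": 90, "green": 10}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```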
Beyond the Model: POC to Production
"Despite the growing adoption of machine learning, research has shown that as many as 50-90% of machine learning models fail to make it into production due to a lack of planning, inadequate data, and the complexity of the models themselves."
This session is designed to provide attendees with an in-depth understanding of how to take a model from POC to production, following the entire machine learning lifecycle from model training to deployment and monitoring. It covers all the essential topics, including feature stores, MLflow, testing for accuracy and fairness, code versus model deployment, multiple patterns for batch and real-time deployment, and monitoring for drift.
During this day-long session, attendees will learn about the latest tools available in Databricks and Azure and hints and tips for best practices at every step in the process. They will also have the opportunity to engage in hands-on exercises and real-world examples to reinforce their understanding of the concepts discussed.
After this session, attendees will be equipped with the knowledge and practical skills needed to run successful MLOps projects and overcome the productionisation challenges faced by the industry.
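As one example of the batch-deployment pattern covered, here is a minimal sketch that applies a registered MLflow model to a Spark table as a UDF; the table and model names are hypothetical.

```python
# Minimal sketch of batch scoring: apply a registered MLflow model to a Spark
# DataFrame as a UDF. Table and model names are hypothetical placeholders.
import mlflow.pyfunc
from pyspark.sql import SparkSession
from pyspark.sql.functions import struct

spark = SparkSession.builder.getOrCreate()

# Load the registered model once as a vectorised Spark UDF.
predict_udf = mlflow.pyfunc.spark_udf(spark, model_uri="models:/churn_classifier/1")

scored = (
    spark.table("catalog.schema.customers")        # hypothetical feature table
         .withColumn("prediction", predict_udf(struct("*")))
)
scored.write.mode("overwrite").saveAsTable("catalog.schema.customer_scores")
```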
Detecting and Managing all types of Model Drift
Over time, machine learning models will degrade for a number of reasons. Maybe you have a book recommendation model but your customers' preferences are changing, or maybe your customers' behaviour has changed since the Covid-19 lockdown. In this talk, I will cover the four types of model drift and the steps you can take to detect and mitigate against them.
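As a concrete example of one detection step, here is a minimal sketch that flags a distribution shift in a single feature using a two-sample Kolmogorov-Smirnov test; the data and threshold are illustrative only.

```python
# Minimal sketch of one common drift check: compare a feature's training
# distribution with recent production data via a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_feature = rng.normal(loc=0.3, scale=1.0, size=5_000)  # shifted distribution

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:  # illustrative threshold
    print(f"Possible data drift detected (KS={statistic:.3f}, p={p_value:.4f})")
else:
    print("No significant drift detected")
```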
Data Science and Analytics from the Trenches: Real-World Experience from Diverse voices in the field
In this session, we will cut through the marketing buzzwords to share experiences, tips, and tricks on how to be successful with Data Science and Analytics in the real world. Tune in to hear the team share real-world experience and get takeaways from industry insiders on real projects with impact. We will also discuss the ethics and fairness of Data Science and Analytics projects and how we can be more inclusive from a technology, people, and process standpoint.
Join this lively and interactive session to hear from the speakers and learn, through practical examples, how to be a more successful data scientist. Bring your questions for discussion!
Want End-to-End MLOps? Look no further than Databricks!
Arguably the largest challenge in ML today is effectively deploying reliable and efficient models into production, with experts estimating that as many as 90% of models created never make it to production. MLOps streamlines the process of taking machine learning models to production, and then maintaining and monitoring them. With new MLOps micro-vendors popping up every day, is there a tool that does everything?
In this session, we will consider Databricks as an end-to-end MLOps tool, exploring collaborative workspaces, feature stores, model registries and model serving. We will also touch upon other critical MLOps practices such as model fairness, explainability and monitoring.
With practical demos of Databricks Feature Store, MLflow and real-time Model Serving, this session is suitable for Data Scientists and Machine Learning Engineers of all levels.
Empowering MLOps with Feature Stores
One rising challenge in ML is how to manage and serve features at scale, enabling data scientists and engineers to efficiently create, store, and share features across different stages of the machine learning pipeline.
In this session, we will delve into the world of Feature Stores and their emerging role in MLOps. We will explore important concepts including feature engineering, feature versioning, feature serving, and feature metadata management. With practical demos in two leading Feature Store implementations, Databricks and Feast, we will explore the benefits, best practices, common challenges and pitfalls and how to address them.
Whether you are a data scientist, machine learning engineer, or data engineer, this session will provide valuable insights and practical demos to help you harness the power of Feature Stores in your organisation's MLOps journey.
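To give a feel for the Feast side of the demos, here is a minimal sketch of low-latency online feature retrieval; it assumes a feature repository has already been defined and applied, and the feature view and entity names are hypothetical.

```python
# Minimal sketch of online feature retrieval with Feast. Assumes `feast apply`
# has already materialised a repo; feature view and entity names are hypothetical.
from feast import FeatureStore

store = FeatureStore(repo_path=".")  # path to the Feast feature repository

features = store.get_online_features(
    features=[
        "user_features:purchases_30d",
        "user_features:avg_basket_value",
    ],
    entity_rows=[{"user_id": "u1"}, {"user_id": "u2"}],
).to_dict()

print(features)
```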
DataPopkorn - a bite-sized knowledge! (2025 - leaf edition) Sessionize Event Upcoming
DATA:Scotland 2025 Sessionize Event
SQLBits 2025 - General Sessions Sessionize Event
Data Toboggan - Winter Edition 2025 Sessionize Event
DataPopkorn - a bite-sized knowledge! Sessionize Event
Data Relay 2024 Sessionize Event
DATA:Scotland 2024 Sessionize Event
Data Toboggan - Cool Runnings 2024 Sessionize Event
SQLBits 2024 - General Sessions Sessionize Event
Data Toboggan - Winter Edition 2024 Sessionize Event
DATA:Scotland 2023 Sessionize Event
Data Toboggan - Cool Runnings 2023 Sessionize Event
Southampton Data Platform and Cloud user group - in-person meetup User group Sessionize Event
Dativerse #2 Sessionize Event
Data Relay 2022 Sessionize Event
SQLBits 2022 Sessionize Event