Rajkumar Sakthivel

✨Expert in AI-Powered Ops & App Development | Global Public Speaker | Tesco Technology

London, United Kingdom


✨Specialist in transforming LLM applications with private cloud operations. Oxford-educated innovator and dynamic speaker on AI-driven operations, private cloud strategies, and digital transformation.

Area of Expertise

  • Finance & Banking
  • Information & Communications Technology

Topics

  • PHP
  • DevOps
  • Data Science & AI
  • Minimalism
  • LLMs
  • LLMOps
  • MLOps
  • AIOps
  • Python
  • Public Speaking
  • Artificial Intelligence
  • Artificial Intelligence (AI) and Machine Learning
  • Machine Learning & AI
  • Software Development
  • Observability
  • Monitoring & Observability
  • Software Architecture & Scalability

Data Mesh Architecture with Microsoft Fabric

In the rapidly evolving world of data management, the Data Mesh approach is emerging as a transformative force. We will explore this new method that makes working with data easier and more effective, allowing different teams in an organization to manage their data in their own way, while still keeping everything organized and secure.

This session dives deep into the principles of Data Mesh and its practical implementation using Microsoft Fabric.

Building AI Systems: From Concept to Deployment

Explore how AI can enhance our work. Learn how to integrate generative AI with traditional machine learning, unlocking smarter automation and creativity. You’ll discover practical ways to use AI while keeping your data secure and well managed.

Dive into hands-on activities, including vector search for better data retrieval and embedding models for improved recommendations. Experiment with AI in a live playground and learn how to bring open-source models into your projects. These techniques will help you build more powerful, intelligent applications.

Whether you're new to AI or experienced, this session offers valuable insights. Walk away with the skills to apply generative AI in real-world scenarios.
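The vector search technique mentioned above can be sketched in a few lines of plain Python. This is a toy illustration, not any particular library's API: the vectors are hand-written stand-ins for the output of an embedding model.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def vector_search(query_vec, doc_vecs, top_k=2):
    # Rank documents by similarity to the query embedding, best first.
    scored = [(i, cosine_similarity(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

# Toy 3-dimensional "embeddings"; real embedding models produce
# vectors with hundreds or thousands of dimensions.
docs = [[1.0, 0.0, 0.0],   # doc 0: closest to the query
        [0.9, 0.1, 0.0],   # doc 1: similar topic
        [0.0, 1.0, 0.0]]   # doc 2: unrelated
query = [1.0, 0.05, 0.0]

print(vector_search(query, docs))  # doc 0 ranks first, doc 1 second
```

In a production system the ranking step is handled by a vector database or search service; the similarity arithmetic, however, is exactly this.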

Fast-Track Analytics Modernization with Microsoft Fabric: Enterprise-grade Governance & Scalability

This intensive one-day workshop combines the best of both worlds, modern data engineering and AI-driven analytics, using Microsoft Fabric's unified platform. Designed for data professionals, whether data engineers or data scientists, you'll leave with practical skills to accelerate your expertise in Fabric.

Morning Track: Modern Data Engineering with Microsoft Fabric
Dive into Microsoft Fabric's data engineering capabilities, from seamless data integration to cloud-scale analytics. Through interactive labs, you'll:
- Master data ingestion and transformation using Fabric’s Data Engineering experience.
- Work with Apache Spark and Delta Lake for optimized data processing.
- Explore best practices for building efficient, governed data pipelines.

Afternoon Track: Data Insights with AI in Microsoft Fabric
Shift focus to AI-driven analytics, learning how to transform raw data into actionable intelligence. Key activities include:
- Preparing data and training machine learning models within Fabric.
- Implementing AI-powered insights using built-in and custom solutions.
- Applying real-world use cases to enhance decision-making.

From SQL to Insights: Automating Data Storytelling with Azure OpenAI and LangChain

In this session, we'll explore how Azure OpenAI and LangChain can transform data storytelling by turning SQL queries into insights. We'll demonstrate how natural language prompts are converted into SQL queries, enabling automated retrieval of data from databases and turning the results into compelling narratives. This presentation will provide a step-by-step guide to integrating these technologies, showcase a live demo, and highlight best practices for creating AI-driven data stories. Attendees will discover how this approach empowers both technical and non-technical users to interact with data more effectively.
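The prompt-to-SQL-to-narrative loop described here can be sketched with a stubbed model call. In this illustration, `generate_sql` is a hard-coded stand-in for the Azure OpenAI call that LangChain would orchestrate, and the table, question, and narrative format are all invented for the example.

```python
import sqlite3

def generate_sql(question: str) -> str:
    # Stand-in for the LLM call: in the real pipeline, LangChain sends the
    # question plus the table schema to Azure OpenAI and gets SQL back.
    templates = {
        "total sales by region": "SELECT region, SUM(amount) FROM sales GROUP BY region",
    }
    return templates[question.lower()]

def tell_story(question: str, conn) -> str:
    # Run the generated SQL, then turn the result set into a narrative.
    rows = conn.execute(generate_sql(question)).fetchall()
    parts = [f"{region} brought in {total}" for region, total in rows]
    return f"Answering '{question}': " + "; ".join(parts) + "."

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("North", 120.0), ("South", 80.0), ("North", 30.0)])

print(tell_story("Total sales by region", conn))
```

The real system replaces the lookup table with a model call and the one-line formatter with a second generation step that writes the narrative; the control flow, however, is the same.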

Retrieval Augmented Generation (RAG) with Azure AI Search

In this presentation, we will explore the concept of Retrieval Augmented Generation (RAG) and how it can be leveraged, particularly in the context of enterprise solutions. The RAG architecture grounds generative AI in your own proprietary content, which ensures that the information used to generate responses is tailored to specific business needs.

The key to a successful RAG architecture is the information retrieval system, which determines the inputs to the LLM. We will discuss the critical requirements for this system, including scalable indexing strategies, relevant query capabilities, security, global reach, and seamless integration with embedding and chat/language models. Azure AI Search emerges as the ideal solution. Microsoft offers several built-in approaches for using Azure AI Search in a RAG solution, such as Azure AI Studio, Azure OpenAI Studio, and Azure Machine Learning. We will also explore a custom RAG pattern that gives you more control over the architecture.
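The retrieve-then-generate pattern above is straightforward to sketch independently of any particular search service. In this illustration, a naive keyword scorer stands in for Azure AI Search, and `build_prompt` shows the grounding step where retrieved passages are stitched into the prompt handed to the chat model; the documents and question are invented for the example.

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    # Stand-in for Azure AI Search: score each document by how many
    # query terms it contains, and return the best matches.
    terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query: str, passages: list[str]) -> str:
    # Grounding step: the LLM is instructed to answer only from the
    # retrieved passages, which keeps responses on proprietary content.
    context = "\n".join(f"- {p}" for p in passages)
    return (f"Answer using only the sources below.\n"
            f"Sources:\n{context}\n"
            f"Question: {query}")

docs = ["Refunds are processed within 14 days of the request.",
        "Our headquarters are in London.",
        "Refund requests must include the original order number."]

prompt = build_prompt("How long do refunds take?",
                      retrieve("refunds take how long", docs))
print(prompt)
```

A production retriever would use vector similarity, hybrid ranking, and security trimming, but the shape of the system, retrieve then ground then generate, is exactly this.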

Transforming Data into Insights with Agentic AI: Azure OpenAI & LangChain

In this session, we’ll showcase how Agentic AI powered by Azure OpenAI and LangChain revolutionizes data storytelling by autonomously converting natural language prompts into SQL queries, extracting insights, and crafting compelling narratives. Watch as AI agents intelligently bridge the gap between raw data and actionable stories, reducing manual effort while enhancing accuracy and speed.

We’ll provide a step-by-step guide to deploying these self-directed AI workflows, including a live demo where an AI agent interprets user intent, generates optimized queries, and structures insights into clear, engaging narratives. Learn best practices for implementing AI-driven autonomy in data analysis, empowering both technical and non-technical teams to interact with data seamlessly.

By the end, you’ll see how Agentic AI doesn’t just assist; it takes the initiative, turning complex datasets into strategic stories with minimal human intervention.

Minimalism in DevOps

Minimalism is defined as a design or style in which the simplest and fewest elements are used to create the maximum effect.

In this talk, let us explore minimalism in our DevOps practice and discuss how to achieve an optimal setup.

Thinking about infrastructure through experience and understanding is how you achieve the essence of DevOps. It doesn't mean cutting corners; it's about putting the right resources in the right place.

I’m not looking to make you a DevOps Ninja, but understanding each of these, along with the benefits each can bring, can help you and your team make the right decisions for the project you are creating and ensure a smoother project lifecycle.

Transforming automation with AI-first workflows

Discover how agent flows in Microsoft Copilot Studio revolutionize workflow automation by combining structured processes with AI-driven intelligence. Agent flows enable businesses to automate complex, deterministic tasks such as tax audits, invoice processing, and approvals with speed, consistency, and enterprise-grade scalability.

With Copilot Studio, users can design, deploy, and reuse agent flows across multiple scenarios without deep technical expertise. Makers can describe their intent in natural language, and Copilot generates the workflow steps automatically, integrating AI document processing, generative actions, and human-in-the-loop approvals. The result is seamless automation that evolves alongside business demands, all within a unified platform.

Join this session to explore real-world applications, from finance operations to cross-team collaboration, and learn how agent flows accelerate end-to-end automation. Discover best practices for building intelligent workflows that enhance productivity.

Low Code and AI: Pro Developer Productivity

Discover how AI and low code development can transform your organization’s approach to software solutions. Learn how automated coding, intelligent debugging, and natural language app building accelerate delivery while reducing frustration for developers. See real-world examples of businesses leveraging these technologies to overcome talent gaps and scale their digital transformation efforts.

Join this session to explore best practices for integrating AI and low-code into your development strategy, increasing speed, reducing costs, and unlocking new opportunities for innovation.

LLMOps: Operationalizing Large Language Models for the Real World

Large Language Models (LLMs) like OpenAI GPT, Google Gemini, and Databricks DBRX are transforming industries, but effectively deploying and managing them requires more than traditional machine learning practices. LLMOps is a specialized set of techniques, tools, and workflows designed to tackle the unique challenges of working with LLMs in production. This presentation will explore what makes LLMOps distinct, why it's essential, and how it enables organizations to harness the power of LLMs efficiently, at scale, and with reduced risks.

- Introduction to LLMOps: overview of its components, from data preparation to deployment and monitoring
- MLOps to LLMOps: key differences, including computational demands, fine-tuning with human feedback, and prompt engineering
- Challenges & Solutions: addressing LLM-specific issues like inference cost, model drift, and hallucination
- Best Practices: insights into data prep, governance, CI/CD pipelines, and model monitoring
- Platform Tools: exploring platforms like MLflow and Databricks for effective LLMOps implementation

Fast-track your DataOps expertise in the AI era

In today's AI-driven landscape, efficient DataOps is the backbone of scalable and reliable data pipelines. This session will equip you with cutting-edge strategies to accelerate your DataOps workflows using Azure and Databricks, enabling seamless data integration, processing, and ML deployment. Learn how to automate data governance, optimize performance, and reduce time-to-insight, critical skills for thriving in the AI era.

Discover best practices for orchestrating end-to-end data pipelines, from ingestion to analytics, with Azure Synapse, Delta Lake, and Databricks workflows. We will cover how to leverage AI-powered automation for monitoring, debugging, and scaling your DataOps infrastructure, ensuring agility without compromising security or compliance.

Whether you’re a data engineer, analyst, or AI practitioner, this session will provide actionable insights to streamline your data operations. Walk away with proven techniques to future-proof your DataOps strategy and harness the full potential of Azure and Databricks in an AI-first world.

Fast track development with low-code and AI

Discover how agent flows in Microsoft Copilot Studio revolutionize workflow automation by combining structured processes with AI-driven intelligence. Agent flows enable businesses to automate complex, deterministic tasks such as tax audits, invoice processing, and approvals with speed, consistency, and enterprise-grade scalability.

With Copilot Studio, users can design, deploy, and reuse agent flows across multiple scenarios without deep technical expertise. Makers can describe their intent in natural language, and Copilot generates the workflow steps automatically, integrating AI document processing, generative actions, and human-in-the-loop approvals. The result is seamless automation that evolves alongside business demands, all within a unified platform.

Join this session to explore real-world applications, from finance operations to cross-team collaboration, and learn how agent flows accelerate end-to-end automation. Discover best practices for building intelligent workflows that enhance productivity.

Operationalizing Generative AI: From Concept to Productionizing LLM Solutions with Azure AI Foundry

Take your generative AI projects from experimentation to production with this comprehensive workshop focused on Microsoft's Azure AI Foundry. This hands-on session will guide you through the complete lifecycle of building and deploying enterprise-grade LLM solutions.

We will start with foundational concepts and progress to advanced implementation techniques covering the following:
- End-to-end LLM development workflows in Azure AI Foundry
- Model selection strategies (GPT, Deepseek, Llama) and optimization approaches
- Building production-ready pipelines for fine-tuning and Agentic AI solution implementations
- Deployment architectures for scalable, secure LLM applications

Through interactive labs and real-world case studies, you'll gain practical experience with:
- Azure AI Studio for rapid prototyping and experimentation
- Prompt Flow for building reproducible LLM pipelines
- Model evaluation and continuous monitoring best practices
- Cost optimization and performance tuning techniques
- Implementing responsible AI safeguards and governance controls

Why and How to Scale Your App from a Single VM to Complex Multi-Cloud

When you start building an app, you probably run it on a single VPS or VM. In this session we will discuss the luxury problem of scaling your app from a single box to complex infrastructure with all the bells and whistles. We will try to understand what problem requires what kind of scaling solution.

We will cover the following:
* Quickly bootstrapping a simple app
* Adding various infra components such as load balancers, DB clusters, cloud services, message queues, disaster recovery, multi-cloud, best practices, the Well-Architected Framework, and much more
* Right-sizing and cost optimization
* A la carte vs. fully managed PaaS
* App optimizations and re-architecture

Evolving from MLOps to LLMOps - Architectures and Best Practices

This session explores how AI operations are evolving from traditional MLOps to the new world of LLMOps. As large language models transform how we build AI systems, we'll break down what is different and what stays the same when operationalizing these powerful models.

You will learn practical architectures for managing the complete lifecycle, from data preparation and model training to deployment and monitoring. We'll compare standard MLOps workflows with the new requirements of LLMOps, including prompt management, output validation, and cost optimization for large-scale models.

Using real-world examples with Databricks and MLflow, we will show how to implement these approaches effectively. Whether you're working with traditional machine learning models or cutting-edge LLMs, you'll leave with actionable strategies to streamline your AI operations and deployment pipelines.

Solution Showcase: How PostgreSQL Turns Agentic Workflows into Enterprise Reality

Agentic AI (AI that can make decisions and improve on its own) sounds exciting, but many companies struggle to use it because it’s hard to scale, expensive, and tricky to connect with existing systems. In this session, you’ll see how PostgreSQL helps turn this advanced AI into real, working solutions used by large retail and supply chain enterprise companies.

See a live demo of how industries are putting PostgreSQL AI agents to work, solving unique challenges and unlocking new opportunities. These are real-world applications that are changing the way businesses operate and compete in data-driven markets.

Challenges and Solutions for ML, LLM, and Agentic Deployments

As enterprises race to adopt AI technologies, they face a complex set of challenges across the lifecycle of machine learning (ML), large language models (LLMs), and agentic systems. This panel brings together experts to explore the current state of AI in the enterprise, highlighting real-world use cases and transformative potential. Panelists will dive into critical issues such as securing autonomous agents, explaining AI behavior to non-technical stakeholders, building infrastructure for scalable LLM deployments, and maintaining ML model performance over time. Whether you’re just starting your AI journey or looking to refine your deployment strategy, this session offers practical insights, emerging best practices, and strategic guidance for navigating the fast-evolving AI landscape.

Transforming AI Development with Multi-Agentic Frameworks

Discover how Semantic Kernel and AutoGen work together to create intelligent, adaptable, and collaborative AI agents capable of tackling complex, real-world challenges. Whether you're developing applications for automation, decision-making, or user interaction, this combination unlocks new possibilities for seamless multi-agent orchestration.

Explore how the Semantic Kernel empowers agents to comprehend and process complex data structures, enabling sophisticated reasoning and decision-making. Paired with AutoGen, the framework provides a robust system for creating, managing, and deploying multiple agents working collaboratively to achieve shared objectives. You’ll gain insights into how these tools simplify development, allowing you to focus on innovation rather than technical complexities.

This session highlights the key components of a multi-agent ecosystem, including skill definition, workflow customization, and agent interaction. Learn how Retrieval-Augmented Generation (RAG) enhances information retrieval and decision-making, while knowledge graphs provide a structured backbone for efficient data organization and querying. These tools work in harmony to deliver agents that are both intelligent and efficient, designed for scalability and adaptability.

Whether you're an AI enthusiast, developer, or strategist, this session will provide you with actionable insights into leveraging multi-agent frameworks. Join us to explore the future of AI collaboration, where intelligent agents work together to create transformative solutions tailored to real-world demands.

Azure AI Foundry: Generative AI Development Hub

Azure AI Foundry provides a user-friendly platform that simplifies the process of building Generative Artificial Intelligence (AI) applications. This session will guide you through building generative AI solutions using language models and prompt flow, enabling your applications to deliver meaningful value to users. Explore the Azure AI Foundry model catalog, which features a variety of open-source models, and learn how to select, deploy, and fine-tune a model to meet your specific needs.
Finally, explore how to evaluate and optimize your generative AI applications using Azure AI Studio. Learn techniques to test performance, refine user experiences, and ensure your applications deliver accurate responses consistently.

Are You a Modern Software Engineer?

In a modern world with so many different ways to build web applications, how do we keep up with the latest trends, let alone know which one we should use?

With all these choices, it can be easy to start off on the wrong foot or to be put off from exploring other options besides what we are used to working with.

In this talk, I want to explore the best practices we have when building the foundations of our projects.

I’m not looking to make you a Coding Ninja, but having an understanding of each of these, along with the benefits each can bring, can help you and your team make the right decisions for the project you are creating and ensure a smoother project lifecycle.

Hands-On: Build an MS Fabric Real-Time Intelligence Solution

Dive into the world of real-time intelligence with this hands-on workshop, where you'll learn to build an end-to-end real-time intelligence solution using Microsoft Fabric. Tailored for professionals managing high-traffic platforms, this session will guide you step-by-step through leveraging MS Fabric's powerful tools and features to analyze real-time data and generate insights faster.

Key takeaways include:

- Designing a real-time intelligence solution using Fabric Real-Time Intelligence.
- Querying data instantly with Fabric shortcuts, eliminating the need for copying or moving data.
- Streaming real-time events into Fabric Eventhouse via Eventstream.
- Transforming streaming data with the power of Kusto Query Language (KQL).
- Accessing data seamlessly through OneLake for real-time insights.
- Building dynamic visualizations with real-time dashboards.
- Automating alerts and reflex actions using Data Activator.

LLMSecOps – Building Secure and Reliable AI Applications

In this hands-on session, we will dive into the world of LLMSecOps (Large Language Model Security Operations), which focuses on the critical security aspects of building and deploying Large Language Model (LLM) applications. Unlike traditional development, LLM applications face unique security risks that require a systematic approach to addressing security at every phase, from design through to post-deployment.

Throughout this interactive lab, you will gain practical experience leveraging LLMSecOps principles to build reliable and secure intelligent applications. By the end of the session, you will be equipped with the skills to apply LLMSecOps best practices, helping you secure and enhance the intelligence of your AI applications.

Gen AI Ops: Operationalizing Gen AI for the Real World

Gen AI models like OpenAI GPT, Google Gemini, Databricks DBRX, and DeepSeek are transforming industries, but effectively deploying and managing them requires more than traditional machine learning practices. Gen AI Ops is a specialized set of techniques, tools, and workflows designed to tackle the unique challenges of working with LLMs in production. This presentation will explore what makes Gen AI Ops distinct, why it's essential, and how it enables organizations to harness the power of LLMs efficiently, at scale, and with reduced risks.

From SQL to Insights: Automating Data Storytelling with Azure OpenAI and LangChain

In this session, we'll explore how Azure OpenAI and LangChain can transform data storytelling by turning SQL queries into insights. We'll demonstrate how natural language prompts are converted into SQL queries, enabling automated retrieval of data from databases and turning the results into compelling narratives.

Are You a Modern DevOps Engineer?

In a modern world with so many different ways to set up and build web applications using DevOps, how do we keep up with the latest trends, let alone know which one we should use?

With all these choices, it can be easy to start on the wrong foot or avoid exploring other options besides what we are used to working with.

In this talk, I want to explore the best practices we have when building the foundations of our DevOps projects.

I’m not looking to make you a DevOps Ninja, but understanding each of these, along with the benefits each can bring, can help you and your team make the right decisions for the project you are creating and ensure a smoother project lifecycle.

LLMOps: Operationalizing Large Language Models for the Real World

Large Language Models (LLMs) like OpenAI GPT, Google Gemini, and Databricks DBRX are transforming industries, but effectively deploying and managing them requires more than traditional machine learning practices.

LLMOps is a specialized set of techniques, tools, and workflows designed to tackle the unique challenges of working with LLMs in production.

This presentation will explore what makes LLMOps distinct, why it's essential, and how it enables organizations to harness the power of LLMs efficiently, at scale, and with reduced risks.

Hacking Parenthood: A Software Engineer’s Guide to Raising Future-Ready Kids

Balancing the demands of a software engineering career with the responsibilities of parenthood is no small feat. This talk is designed to provide actionable strategies for raising confident, adaptable, and future-ready children in the midst of a busy life.

Tailored specifically for software engineer families, we'll explore how to optimize daily routines, foster meaningful connections, and integrate work-life balance. Whether you're writing code or guiding your children through their formative years, this session will offer practical tips to help you thrive in both worlds.

Michigan Technology Conference 2025

March 2025 Pontiac, Michigan, United States

National DevOps Conference 2024

Power of Minimalism in DevOps 2024

October 2024 London, United Kingdom

DevOps Oxford

Minimalism in DevOps 2024

October 2024 Oxford, United Kingdom

PHP Sussex & PHP Oxford

A Software Engineer’s Guide to Raising Future-Ready Kids

September 2024 Brighton, United Kingdom

PHP Stoke

Minimalism in DevOps 2023

August 2023 Stoke-on-Trent, United Kingdom

National DevOps Conference 2021

Are You a Modern DevOps Engineer?

October 2021 London, United Kingdom

London Java Community & PHP Vegas

Modern Software Developer Best Practices 2020

June 2020 London, United Kingdom
