

Rajkumar Sakthivel
✨Expert in AI-Powered Ops & App Development | Public Speaker | Tesco Technology
London, United Kingdom
Actions
✨Specialist in transforming LLM applications with private cloud operations, with a proven track record across the UK, Europe, and APEC. Oxford-educated innovator and dynamic speaker on AI-driven operations, private cloud strategies, and digital transformation.
Links
Area of Expertise
Topics
Transforming AI Development with Multi-Agentic Frameworks
Discover how these advanced technologies work together to create intelligent, adaptable, and collaborative AI agents capable of tackling complex, real-world challenges. Whether you're developing applications for automation, decision-making, or user interaction, this framework unlocks new possibilities for seamless multi-agent orchestration.
Explore how the Semantic Kernel empowers agents to comprehend and process complex data structures, enabling sophisticated reasoning and decision-making. Paired with AutoGen, the framework provides a robust system for creating, managing, and deploying multiple agents working collaboratively to achieve shared objectives. You’ll gain insights into how these tools simplify development, allowing you to focus on innovation rather than technical complexities.
This session highlights the key components of a multi-agent ecosystem, including skill definition, workflow customization, and agent interaction. Learn how Retrieval-Augmented Generation (RAG) enhances information retrieval and decision-making, while knowledge graphs provide a structured backbone for efficient data organization and querying. These tools work in harmony to deliver agents that are both intelligent and efficient, designed for scalability and adaptability.
Whether you're an AI enthusiast, developer, or strategist, this session will provide you with actionable insights into leveraging multi-agent frameworks. Join us to explore the future of AI collaboration, where intelligent agents work together to create transformative solutions tailored to the demands.
Azure AI Foundry : Generative AI Development Hub
Azure AI Foundry provides user-friendly platforms that simplify the process of building Generative Artificial Intelligence (AI) applications. This session will guide you through building generative AI solutions using language models and prompt flow, enabling your applications to deliver meaningful value to users. Explore the Azure AI Foundry model catalog, which features a variety of open-source models, and learn how to select, deploy, and fine-tune a model to meet your specific needs.
Finally, explore how to evaluate and optimize your generative AI applications using Azure AI Studio. Learn techniques to test performance, refine user experiences, and ensure your applications deliver accurate responses consistently.
Are You a Modern Software Engineer?
In a modern world with so many different ways to build web applications, how do we keep up with the latest trend, let alone know which one should we use?
With all these choices, it can be easy to start off on the wrong foot or to be put off from exploring other options besides what we are used to working with.
In this talk, I want to explore the best practices we have when building the foundations of our projects.
I’m not looking to make you a Coding Ninja, but having an understanding of each of these, along with the benefits each can bring, can help you and your team make the right decisions for the project you are creating and ensure a smoother project lifecycle.
Hands-On: Build an MS Fabric Real-Time Intelligence Solution
Dive into the world of real-time Intelligence with this hands-on workshop, where you'll learn to build an end-to-end real-time intelligence solution using Microsoft Fabric. Tailored for professionals managing high-traffic platforms, this session will guide you step-by-step through leveraging MS Fabric's powerful tools and features to analyze real-time data and generate insights faster.
Key takeaways include:
Designing a Real-Time Intelligence Solution using Fabric Real-Time Intelligence.
Querying data instantly with Fabric shortcuts, eliminating the need for copying or moving data.
Streaming real-time events into Fabric Eventhouse via Eventstream.
Transforming streaming data with the power of Kusto Query Language (KQL).
Accessing data seamlessly through OneLake for real-time insights.
Building dynamic visualizations with real-time dashboards.
Automating alerts and reflex actions using Data Activator.
LLMSecOps – Building Secure and Reliable AI Applications
In this hands-on session, we will dive into the world of LLMSecOps (Large Language Model Security Operations), which focuses on the critical security aspects of building and deploying Large Language Model (LLM) applications. Unlike traditional development, LLM applications face unique security risks that require a systematic approach to address security at every phase— from design through to post-deployment.
Throughout this interactive lab, you will gain practical experience leveraging LLMSecOps principles to build reliable and secure intelligent applications. By the end of the session, you will be equipped with the skills to apply LLMSecOps best practices, helping you secure and enhance the intelligence of your AI applications
Gen AI Ops: Operationalizing Gen AI for the Real World
Gen AI Models like OpenAI GPT,/Groovy Google Gemini, and Databricks DBRX, Deepseek are transforming industries, but effectively deploying and managing them requires more than traditional machine learning practices. Gen AI Ops is a specialized set of techniques, tools, and workflows designed to tackle the unique challenges of working with LLMs in production. This presentation will explore what makes Gen AI Ops distinct, why its essential, and how it enables organizations to harness the power of LLMs efficiently, at scale, and with reduced risks.
From SQL to Insights: Automating Data Storytelling with Azure OpenAI and Langchain
In this session, we'll explore how Azure OpenAI and Langchain can change the data storytelling by transforming SQL queries into insights. We'll demonstrate how natural language prompts are converted into SQL queries, enabling automated retrieval of data from databases and turning them into compelling narratives.
LLMOps: Operationalizing Large Language Models for the Real World
Large Language Models (LLMs) like OpenAI GPT,/Groovy Google Gemini, and Databricks DBRX are transforming industries, but effectively deploying and managing them requires more than traditional machine learning practices. LLMOps is a specialized set of techniques, tools, and workflows designed to tackle the unique challenges of working with LLMs in production. This presentation will explore what makes LLMOps distinct, why its essential, and how it enables organizations to harness the power of LLMs efficiently, at scale, and with reduced risks.
Introduction to LLMOps: Overview of its components from data preparation to deployment and monitoring
MLOps to LLMOps: Key differences including computational demands, fine-tuning with human feedback, and prompt engineering
Challenges & Solutions: Addressing LLM-specific issues like inference cost, model drift, and hallucination
Best Practices: Insights into data prep, governance, CI/CD pipelines, and model monitoring
Platform Tools: Exploring platforms like MLflow and Databricks for effective LLMOps implementation
Are You a Modern DevOps Engineer?
In a modern world with so many different ways to set up/build web applications using DevOps, how do we keep up with the latest trend, let alone know which one should we use?
With all these choices, it can be easy to start on the wrong foot or avoid exploring other options besides what we are used to working with.
In this talk, I want to explore the best practices we have when building the foundations of our DevOps projects.
I’m not looking to make you a DevOps Ninja, but understanding each of these, along with the benefits each can bring, can help you and your team make the right decisions for the project you are creating and ensure a smoother project lifecycle.
Power of Minimalism in DevOps
This public talk explores the concept of minimalism in DevOps and its benefits. DevOps is centred around continuous integration and delivery, requiring constant communication and collaboration between development and operations teams. Minimalism in DevOps refers to keeping processes, tools, and workflows as simple as possible to achieve these goals.
The talk will cover the benefits of adopting a minimalist approach in DevOps, such as increased efficiency, improved reliability, and reduced complexity. It will also address common challenges in implementing minimalism and provide practical tips for successful adoption.
Attendees will gain actionable insights into optimizing CI/CD pipelines with tools like Jenkins and GitLab CI, managing infrastructure using Terraform and AWS CloudFormation, and simplifying operations through automation and cloud services like AWS Lambda and GKE. Whether working in cloud environments, SaaS, or e-commerce, attendees will leave with a deeper understanding of how minimalism can enhance their DevOps practices and how to implement it effectively.
Hacking Parenthood: A Software Engineer’s Guide to Raising Future-Ready Kids
Balancing the demands of a software engineering career with the responsibilities of parenthood is no small feat. This talk is designed to provide actionable strategies for raising confident, adaptable, and future-ready children in the midst of a busy life.
Tailored specifically for software engineer families, we'll explore how to optimize daily routines, foster meaningful connections, and integrate work-life balance. Whether you're writing code or guiding your children through their formative years, this session will offer practical tips to help you thrive in both worlds.
LLMOps: Operationalizing Large Language Models for the Real World
Large Language Models (LLMs) like OpenAI GPT,/Groovy Google Gemini, and Databricks DBRX are transforming industries, but effectively deploying and managing them requires more than traditional machine learning practices.
LLMOps is a specialized set of techniques, tools, and workflows designed to tackle the unique challenges of working with LLMs in production.
This presentation will explore what makes LLMOps distinct, why its essential, and how it enables organizations to harness the power of LLMs efficiently, at scale, and with reduced risks.
Michigan Technology Conference 2025 Sessionize Event
National DevOps Conference 2024
Power of Minimalism in DevOps 2024
DevOps Oxford
Minimalism in DevOps 2024
PHP Sussex & PHP Oxford
A Software Engineer’s Guide to Raising Future-Ready Kids
PHP Stoke
Minimalism in DevOps 2023
National DevOps Conference 2021
Are You a Modern DevOps Engineer?
London Java Community & PHP Vegas
Modern Software Developer Best Practices 2020

Rajkumar Sakthivel
✨Expert in AI-Powered Ops & App Development | Public Speaker | Tesco Technology
London, United Kingdom
Links
Actions
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top