Rahul Rai

Group Product Manager, LogicMonitor | MVP Microsoft Azure

Sydney, Australia

I am a self-driven technology leader with proven skills in leadership, decision-making, and quick learning. I have over 13 years of hands-on experience in cloud and web technologies. I apply emerging technologies and processes to bring efficiency to enterprise technology operations. As a leader, I have successfully established and led engineering teams and designed enterprise applications to solve organizational challenges. I foster innovation and collaboration and drive improvement across engineering teams.

I have authored three books and a free workshop on microservices orchestrators and service meshes: Microservices with Azure (Packt) on Azure Service Fabric, Kubernetes Succinctly and Istio Succinctly (Syncfusion), and the Fast Track Istio workshop on Katacoda. I am an active Microsoft MVP (Most Valuable Professional) and DZone MVB (Most Valuable Blogger). You can connect with me through my blog: https://thecloudblog.net.


Area of Expertise

  • Information & Communications Technology


  • Azure
  • Kubernetes
  • Service Mesh
  • Istio

Cloud-Native Kubernetes Workflows on AKS with Argo

Kubernetes is the most popular container orchestrator available today. Building workflows is one of the key requirements that we, as developers, fulfill for businesses. As businesses rapidly turn to Kubernetes to host their workloads, the Kubernetes community has produced a Kubernetes-native service named Argo that addresses the need to host workflows on Kubernetes.

Argo is a Cloud Native Computing Foundation (CNCF) project developed to solve many common business problems in Kubernetes-based deployment environments. It has four sub-projects: Argo Workflows, Argo CD, Argo Events, and Argo Rollouts. Each component can be used independently of the others, which makes Argo very flexible.

Some of the main advantages of Argo Workflows are:
• Natively designed for containers.
• Runs on any Kubernetes environment.
• Schedules workflows with just YAML configurations.
• Puts a cloud-scale supercomputer at your fingertips!

In this session, we will introduce Argo and its main components, and walk through several examples of Argo workflows with live demos.

Value to Attendees

The following are the key takeaways from the session.

1. Learn to build Argo workflows in Kubernetes.
2. Remove custom tweaks from source code and Kubernetes setup, and configure workflows entirely with YAML.
3. Configure blue/green deployments and A/B testing using a lightweight component.
4. Configure event-based workflows using internal events or external services using webhooks.
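
To give a flavor of the YAML-only takeaway above, here is a minimal sketch of an Argo Workflow. The names (`hello-world-`, `main`) and the container image are illustrative, not prescribed by the session:

```yaml
# A minimal "hello world" Argo Workflow, defined entirely in YAML.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-   # Argo appends a random suffix per run
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [echo]
        args: ["hello from Argo"]
```

Once the Argo Workflows controller is installed in the cluster, a workflow like this can be submitted with `argo submit --watch hello.yaml` or plain `kubectl create -f hello.yaml`.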

Deployment Certifications at Scale on AKS with Azure Policy and Azure Functions

Did you know that your Kubernetes cluster can talk to you, even with voice or SMS messages?

Controlling resource deployments on an Azure Kubernetes Service (AKS) cluster can quickly become challenging, particularly when multiple Continuous Delivery pipelines target the same cluster. In such scenarios, you want to build smarts into the Kubernetes cluster to admit or deny pods based on your admission criteria. With Azure Policy for AKS, you can write static admission rules for your Kubernetes cluster. But what if your admission demands are dynamic, or you want to roll out custom admission policies? By creating custom admission webhooks for Kubernetes, you can define custom policies that regulate the deployment of resources to a cluster.


In this session, we will show how to administer Azure Policy for AKS and then build a serverless validating admission webhook with Azure Functions to apply governance policies to deployments in Kubernetes. Azure Functions allows you to integrate with external services without writing a single line of integration code. We will use the native Twilio binding of Azure Functions to send SMS updates on Kubernetes deployments to the Ops/SRE team. After participating in this session, you will understand how easy it is to write custom validating webhooks for Kubernetes. You will also learn to build and deploy a serverless infrastructure to certify deployments at scale.
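
The heart of a validating webhook, whether it is hosted in an Azure Function or elsewhere, is a small handler that receives a Kubernetes AdmissionReview payload and returns an allow/deny verdict. A minimal sketch of that logic in plain Python (the "every pod must carry a `team` label" policy is a hypothetical example, not the session's exact rule):

```python
import json

def validate_admission_review(body: str) -> str:
    """Take an AdmissionReview JSON payload; return the response JSON.

    Hypothetical policy: deny any pod that does not declare a 'team'
    label, so every deployment can be attributed to an owning team.
    """
    review = json.loads(body)
    request = review["request"]
    labels = request["object"]["metadata"].get("labels", {})
    allowed = "team" in labels
    response = {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": request["uid"],  # the response must echo the request UID
            "allowed": allowed,
        },
    }
    if not allowed:
        response["response"]["status"] = {
            "message": "denied: pods must declare a 'team' label"
        }
    return json.dumps(response)
```

Inside an Azure Functions HTTP trigger, this function would simply wrap the request body and return the JSON as the HTTP response; the SMS notification would be layered on top via the output binding.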

Fast Track Istio


The Fast Track Istio workshop will get developers up and running with Istio on a live Kubernetes cluster. Let us begin by understanding why enterprises need service meshes in the first place. Organizations all over the world are in love with microservices. Teams that adopt microservices have the flexibility to choose their tools and languages, and they can iterate on designs and scale quickly. However, as the number of services in an organization continues to grow, teams face challenges that can be broadly classified into two categories:

• Orchestrating the infrastructure on which the microservices are deployed.
• Consistently implementing the best practices of service-to-service communication across microservices.

By adopting container orchestration solutions such as Docker Swarm, Kubernetes, and Marathon, developers gain the ability to delegate infrastructure-centric concerns to the hosting platform. With capabilities such as cluster management, scheduling, service discovery, application state maintenance, and host monitoring, the container orchestration platforms specialize in servicing layers 1–4 of the Open Systems Interconnection (OSI) network stack.

Almost all popular container orchestrators also provide some application life-cycle management (ALM) capabilities at layers 5–7, such as application deployment, application health monitoring, and secret management. However, often these capabilities are not enough to meet all the application-level concerns, such as rate-limiting and authentication.

Istio is an open-source service mesh that automatically adds the network capabilities that microservices need without requiring developers to make any changes to the source code. Istio simplifies service-to-service communication, traffic ramping, fault tolerance, performance monitoring, tracing, and much more.

In this workshop, participants will learn the fundamentals of Istio, its use cases, and its configurations, and will see how Istio can address almost all service management concerns for new and existing applications simply by writing and applying configurations to the services. The workshop provides hands-on experience building, deploying, and managing applications with Istio on Kubernetes.
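
As an example of the configuration-only approach, traffic ramping in Istio is a single resource. A hedged sketch, assuming a `reviews` service whose `v1` and `v2` subsets are already defined in a DestinationRule:

```yaml
# Shift 10% of traffic to the v2 subset without touching application code.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

Ramping the rollout forward is then just a matter of editing the weights and re-applying the manifest.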

Format of the Workshop

We will use a set of simple microservices applications that resemble real-world scenarios to explore the various nuances of Istio by deploying them to a live Kubernetes cluster. By working through the samples and exercises, participants will gain a thorough understanding of the features of Istio and its advantages. The workshop will be delivered in an easy-to-digest format over two days. On the first day, we will build an understanding of the platform and its features and deploy simple applications to Istio to explore the capabilities of the networking APIs. On the second day, we will add traffic management, security policies, and monitoring capabilities to the sample applications.

Value to Developers

This workshop will help developers get familiar with the concepts of Istio and apply them to real-world scenarios. After completing the workshop, the participants will gain experience with the following:

1. Understanding the value proposition of service meshes.
2. Managing inter-microservice communication.
3. Managing the security of microservices through the platform.
4. Configuring observability of microservices.
5. Implementing common microservice networking patterns.

Workshop Outline

Over two days, participants will build and experience the features of the Istio service mesh.

Day 1: Introduction to Istio
1. Service mesh
2. Use cases
3. Advantages of using Istio as a service mesh
4. Istio architecture
5. Istio components

Day 1: Hands-on Istio
1. Istio deployment on K8s
2. Istio deployment configurations
3. Istioctl client

Day 1: Deploying Applications with Istio on a Kubernetes Cluster
1. Configuration using Helm
2. Configuration using kubectl
3. Deploying applications

Day 2: Observability and Traffic Management
1. Networking API 1: Egress and ingress gateway
2. Networking API 2: Service entry, destination rule, virtual service
3. Monitoring on Istio: metrics, traces, logs

Day 2: Security
1. Authentication: mTLS, transport authentication, origin authentication
2. Authorization policy
3. Securing ingress

Day 2: Patterns
1. Canary deployments
2. A/B testing with Iter8
3. Implementing microservices patterns: timeouts, retry, circuit breakers, fault injection
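
To set expectations for the security module above: these capabilities, too, are small declarative resources. For instance, mesh-wide strict mTLS is a single PeerAuthentication policy (placed in `istio-system`, Istio's convention for mesh-wide scope):

```yaml
# Require mutual TLS for all workload-to-workload traffic in the mesh.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```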

Target Audience
• Engineers and DevOps professionals.
• Hands-on Engineering leaders.

System Requirements
• Laptop with network access
• Software installed (see below)

Computer setup
• Windows 10 Pro with Hyper-V and Docker for Windows (with Kubernetes cluster setup)
• Macs with Docker for Mac installed (with Kubernetes cluster setup)
• Visual Studio Code

Supplementary Links
• Docker for Windows: https://docs.docker.com/docker-for-windows/
• Docker For Mac: https://docs.docker.com/docker-for-mac/
• Istio: https://istio.io/

Using Azure Function for Dynamic Admission Control in Kubernetes

Kubernetes version 1.9 introduced two admission controllers that allow you to write custom admission plugins: ValidatingAdmissionWebhook and MutatingAdmissionWebhook. These webhooks give you a great deal of flexibility to integrate directly into the resource admission process.

In this session, we will write some validating admission webhooks with Azure Functions and use them to apply governance policies on the deployments in Kubernetes. After completing this session, you will understand how easy it is to write custom validating webhooks for Kubernetes. You will be ready to automate your existing organizational deployment policies and certify deployments at scale in Kubernetes.
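
Registering such a webhook with the cluster is done through a ValidatingWebhookConfiguration resource. A sketch, assuming the Azure Function is reachable at a hypothetical URL:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy-webhook
webhooks:
  - name: pod-policy.example.com
    clientConfig:
      # Hypothetical endpoint; point this at your own function's HTTPS URL.
      url: https://pod-policy.azurewebsites.net/api/validate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore   # don't block deployments if the function is down
```

With this in place, the API server calls the function on every pod creation and honors the allow/deny verdict it returns.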

The Deployment Architecture of an Enterprise API Management Platform on AKS

API Management is increasingly accepted as an essential part of any enterprise API program. It is also a key enabler of digital strategies. In a microservices architecture, you need a central hub for users to interact with services rather than having them face the complexity of individual services. A mature API management platform like the Azure API Management (APIM) service consists of a set of tools that help you aggregate APIs and provide several other functions such as caching, request and response transformation, response bundling, and versioning.

In this fast-paced session, we will discuss and demonstrate the common use cases of an API management platform and some of the common ingress services available for Kubernetes, such as NGINX, Istio, and APIM. APIM blends well with Azure Kubernetes Service (AKS), and there are multiple ways to deploy it with AKS in a Virtual Network (VNet):

1. APIM as an external gateway to AKS
2. APIM as an internal gateway to AKS
3. APIM as an ingress service in AKS

We will discuss the use cases to help you choose the APIM gateway deployment strategy that best suits your needs.

NDC Sydney 2021
November 2021

NDC Sydney 2020
October 2020, Sydney, Australia

ServerlessDays ANZ 2020
September 2020
