Speaker

Rahul Rai

Group Product Manager, LogicMonitor | MVP Microsoft Azure

Sydney, Australia

Rahul Rai is a passionate technology enthusiast with a strong interest in programming. He is a Microsoft MVP - Azure and works as Group Product Manager at LogicMonitor. He possesses extensive experience in cloud and web technologies, as well as in leading engineering teams and building impactful applications to address critical business needs.

Outside of his professional life, Rahul is committed to sharing his knowledge and insights with others. He writes books, conducts free workshops, and maintains his blog at https://thecloudblog.net. With a knack for simplifying complex topics, he aims to make technology accessible and understandable for individuals at all levels of expertise.

Area of Expertise

  • Information & Communications Technology

Topics

  • Azure
  • Kubernetes
  • ServiceMesh
  • Istio

The Open Service Mesh Crash Course

Organizations all over the world are in love with microservices. Teams that adopt microservices have the flexibility to choose their tools and languages, and they can iterate on designs and scale quickly. However, as the number of services in an organization continues to grow, teams face challenges that can be broadly classified into two categories:

• Orchestrate the infrastructure on which the microservices are deployed.
• Consistently implement the best practices of service-to-service communication across microservices.

Kubernetes and service meshes are the solutions to those challenges. While Kubernetes is an excellent microservices lifecycle management platform, a service mesh can address the networking concerns of microservices deployed on a Kubernetes cluster. Open Service Mesh (OSM) is a lightweight and extensible cloud-native service mesh. OSM takes a simple approach for users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments. Using the CNCF Envoy project, OSM implements Service Mesh Interface (SMI) for securing and managing your microservice applications.

With Open Service Mesh, you can:

• Easily and transparently configure traffic shifting for deployments
• Secure end-to-end service-to-service communication by enabling mTLS
• Define and execute fine-grained access control policies for services
• Gain observability and insights into application metrics for debugging and monitoring services
• Integrate with external certificate management services and solutions through a pluggable interface

In this demo-packed session, we will discuss the fundamentals of OSM, its use cases, and its configurations, and learn how OSM can take care of almost all of the service management concerns of new and existing applications. We will install OSM on an Azure Kubernetes Service (AKS) cluster and implement common microservices networking patterns on a set of microservices.
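For a flavor of what these configurations look like, traffic shifting in OSM is expressed through SMI resources such as TrafficSplit. A minimal sketch follows; the service names, namespace, and weights are illustrative, and the SMI API version may vary with the OSM release:

```yaml
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: bookstore-split
  namespace: bookstore
spec:
  service: bookstore        # root service that clients address
  backends:
  - service: bookstore-v1   # stable version receives most traffic
    weight: 90
  - service: bookstore-v2   # new version receives a small share
    weight: 10
```

Adjusting the weights over successive applies is all it takes to shift traffic gradually from one version to the other.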

Value to Attendees

This session will help attendees get familiar with OSM concepts and patterns. Then, through a series of demos, they will learn to apply the patterns to enterprise scenarios. The following are the key takeaways from the session:

• Understand the value proposition of service meshes
• Get an introduction to OSM and SMI
• Manage inter-microservice communication
• Manage the security of microservices
• Make microservices observable
• Implement common microservice networking patterns

Cloud-Native Kubernetes Workflows on AKS with Argo

Introduction
Kubernetes is the most popular container orchestrator available today. Building workflows is one of the key requirements that we, as developers, fulfill for businesses. As businesses rapidly turn to Kubernetes to host their workloads, the Kubernetes community has produced a Kubernetes-native project named Argo that addresses the need for hosting workflows on Kubernetes.

Argo is a Cloud Native Computing Foundation (CNCF) project developed to solve many common problems in Kubernetes-based deployment environments. It has four sub-projects: Argo Workflows, Argo CD, Argo Events, and Argo Rollouts. Each component can be used independently of the others, which makes Argo very flexible.

Some of the main advantages of Argo Workflows are:
• Natively designed for containers.
• Can run on any Kubernetes environment.
• Schedules workflows with just YAML configurations.
• Puts a cloud-scale supercomputer at your fingertips!

In this session, we will cover an introduction to Argo and its main components. We will also cover some examples of Argo workflows with several demos.

Value to Attendees

The following are the key takeaways from the session.

1. Learn to build Argo workflows on Kubernetes.
2. Remove custom tweaks from source code and Kubernetes setup, and configure workflows entirely with YAML configurations.
3. Configure blue/green deployments and A/B testing using a lightweight component.
4. Configure event-based workflows using internal events or external services via webhooks.
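As a hint of how lightweight these YAML configurations are, a complete Argo workflow can be declared in a few lines, similar to the well-known whalesay "hello world" example from the Argo Workflows documentation:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-   # Argo appends a random suffix per run
spec:
  entrypoint: whalesay         # template to invoke when the workflow starts
  templates:
  - name: whalesay
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["hello world"]
```

Submitting this manifest (for example with `argo submit`) schedules the container as a workflow step on the cluster, with no changes to application source code.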

Deployment Certifications at Scale on AKS with Azure Policy and Azure Functions

Did you know that your Kubernetes cluster can talk to you, even through voice or SMS messages?

Controlling resource deployments on an Azure Kubernetes Service (AKS) cluster can quickly become challenging, particularly when multiple Continuous Delivery pipelines target the same cluster. In such scenarios, you want to build smarts within the Kubernetes cluster to admit or deny pods that do not meet your admission criteria. With Azure Policy for AKS, you can write static admission rules for your Kubernetes cluster. But what if your admission demands are dynamic or you want to roll out custom admission policies? By creating custom admission webhooks for Kubernetes, you can define custom policies that regulate the deployment of resources to a cluster.

In this session, we will present how you can administer Azure Policy for AKS and subsequently build a serverless validating admission webhook with Azure Functions to apply governance policies to deployments in Kubernetes. Azure Functions allows you to integrate with external services without writing a single line of integration code. We will use the native Twilio binding of Azure Functions to send SMS updates on Kubernetes deployments to the Ops/SRE team. After participating in this session, you will understand how easy it is to write custom validating webhooks for Kubernetes. You will also learn to build and deploy a serverless infrastructure to certify deployments at scale.
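Regardless of how the webhook is hosted, its core is a function that inspects an AdmissionReview request and returns an allow or deny response. The sketch below, in Python, illustrates the shape of that exchange; the policy shown (requiring an `owner` label on pods) and all names are illustrative, not the session's actual code:

```python
def validate(admission_review: dict) -> dict:
    """Build a validating admission response for a Kubernetes AdmissionReview.

    Illustrative policy: every pod must carry an 'owner' label.
    """
    request = admission_review["request"]
    labels = request["object"]["metadata"].get("labels", {})
    allowed = "owner" in labels

    response = {
        "uid": request["uid"],  # the response must echo the request UID
        "allowed": allowed,
    }
    if not allowed:
        # The message surfaces in kubectl output when the pod is rejected.
        response["status"] = {"message": "pod is missing the required 'owner' label"}

    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }
```

An HTTP-triggered Azure Function would deserialize the request body into this dictionary, call a function like this, and serialize the returned AdmissionReview as the HTTP response.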

Fast Track Istio

Overview

The Fast Track Istio workshop will get developers up and running with Istio on a live Kubernetes cluster. Let us begin by understanding why enterprises need service meshes in the first place. Organizations all over the world are in love with microservices. Teams that adopt microservices have the flexibility to choose their tools and languages, and they can iterate on designs and scale quickly. However, as the number of services in an organization continues to grow, teams face challenges that can be broadly classified into two categories:

• Orchestrate the infrastructure on which the microservices are deployed.
• Consistently implement the best practices of service-to-service communication across microservices.

By adopting container orchestration solutions such as Docker Swarm, Kubernetes, and Marathon, developers gain the ability to delegate infrastructure-centric concerns to the hosting platform. With capabilities such as cluster management, scheduling, service discovery, application state maintenance, and host monitoring, the container orchestration platforms specialize in servicing layers 1–4 of the Open Systems Interconnection (OSI) network stack.

Almost all popular container orchestrators also provide some application life-cycle management (ALM) capabilities at layers 5–7, such as application deployment, application health monitoring, and secret management. However, often these capabilities are not enough to meet all the application-level concerns, such as rate-limiting and authentication.

Istio is an open-source service mesh that automatically adds the network capabilities that microservices need without requiring developers to make any changes to the source code. Istio simplifies service-to-service communication, traffic ramping, fault tolerance, performance monitoring, tracing, and much more.

In this workshop, participants will learn the fundamentals of Istio, its use cases, and its configurations, and see how Istio can take care of almost all of the service management issues of new as well as existing applications by writing and applying configurations on the services. The workshop will cover hands-on experience of building, deploying, and managing applications with Istio on Kubernetes.
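As a taste of such configurations, the following sketch routes 90% of traffic to one version of a service and 10% to another, the basis of the canary pattern covered on day two. The service, subset, and label names are illustrative:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1     # stable version
      weight: 90
    - destination:
        host: reviews
        subset: v2     # canary version
      weight: 10
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1      # selects pods labeled version=v1
  - name: v2
    labels:
      version: v2
```

The VirtualService defines the weighted routes, while the DestinationRule maps each subset to pods by label, all without touching application code.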

Format of the Workshop

We will use a set of simple microservices applications that resemble real-world scenarios to explore the various nuances of Istio by deploying them to a live Kubernetes cluster. By working through the samples and exercises, participants will get a thorough understanding of the features of Istio and its advantages. The workshop will be delivered in an easy-to-digest format over two days. On the first day, we will gain an understanding of the platform and its features and deploy simple applications to Istio to understand the capabilities of the network APIs. On the second day, we will add traffic management, security policies, and monitoring capabilities to the sample applications.

Value to Developers

This workshop will help developers get familiar with the concepts of Istio and apply them to real-world scenarios. After completing the workshop, the participants will gain experience with the following:

1. Understand the value proposition of service meshes.
2. Manage inter-microservice communication.
3. Manage the security of microservices through the platform.
4. Configure observability of microservices.
5. Implement common microservice networking patterns.

Workshop Outline

Over two days, participants will build and experience the features of the Istio service mesh.

Day 1: Introduction to Istio
1. Service mesh
2. Use cases
3. Advantages of using Istio as a service mesh
4. Istio architecture
5. Istio components

Day 1: Hands-on Istio
1. Istio deployment on K8s
2. Istio deployment configurations
3. Istioctl client

Day 1: Deploying Application with Istio on Kubernetes Cluster
1. Configuration using Helm
2. Configuration using kubectl
3. Deploying Application

Day 2: Observability and Traffic Management
1. Networking API 1: Egress and ingress gateway
2. Networking API 2: Service entry, destination rule, virtual service
3. Monitoring on Istio: metrics, traces, logs

Day 2: Security
1. Authentication: mTLS, transport authentication, origin authentication
2. Authorization policy
3. Securing ingress

Day 2: Patterns
1. Canary deployments
2. A/B testing with Iter8
3. Implementing microservices patterns: timeouts, retry, circuit breakers, fault injection

Target Audience
• Engineers and DevOps professionals.
• Hands-on Engineering leaders.

System Requirements
• Laptop with network access
• Software installed (see below)

Computer setup
• Windows 10 Pro with Hyper-V and Docker for Windows (with Kubernetes cluster setup)
• Macs with Docker for Mac installed (with Kubernetes cluster setup)
• Visual Studio Code

Supplementary Links
• Docker for Windows: https://docs.docker.com/docker-for-windows/
• Docker For Mac: https://docs.docker.com/docker-for-mac/
• Istio: https://istio.io/

Using Azure Function for Dynamic Admission Control in Kubernetes

Kubernetes version 1.9 introduced two admission controllers that allow you to write custom admission plugins: ValidatingAdmissionWebhook and MutatingAdmissionWebhook. These webhooks give you a great deal of flexibility to integrate directly into the resource admission process.

In this session, we will write some validating admission webhooks with Azure Functions and use them to apply governance policies on the deployments in Kubernetes. After completing this session, you will understand how easy it is to write custom validating webhooks for Kubernetes. You will be ready to automate your existing organizational deployment policies and certify deployments at scale in Kubernetes.
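To wire an externally hosted webhook, such as an HTTP-triggered Azure Function, into the admission process, you register its endpoint with a ValidatingWebhookConfiguration. A minimal sketch follows; the names, rules, and endpoint URL are illustrative, and the caBundle placeholder must be replaced with the base64-encoded CA certificate that signed the endpoint's TLS certificate:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deployment-policy
webhooks:
- name: validate.deployments.example.com
  rules:
  - apiGroups: ["apps"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["deployments"]
  clientConfig:
    # Illustrative endpoint; point this at your HTTPS webhook, e.g. a Function App.
    url: https://myfunctionapp.azurewebsites.net/api/validate
    caBundle: <base64-encoded CA certificate>
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail   # reject deployments if the webhook is unreachable
```

Once applied, the API server calls the endpoint for every matching deployment and admits or rejects it based on the webhook's response.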

The Deployment Architecture of an Enterprise API Management Platform on AKS

API Management is increasingly accepted as an essential part of any enterprise API program. It is also a key enabler of digital strategies. In a microservices architecture, you need a central hub for users to interact with services rather than having them face the complexity of individual services. A mature API management platform like Azure API Management (APIM) service consists of a set of tools that help you aggregate APIs and provides several other functions such as caching, request and response transformation, bundling responses, and versioning.

In this fast-paced session, we will discuss and demonstrate the common use cases of the API Management platform and some of the common ingress services available in Kubernetes, such as NGINX, Istio, and APIM. APIM blends well with Azure Kubernetes Service (AKS), and there are multiple ways to deploy it with AKS in a Virtual Network (VNet):

1. APIM as an external gateway to AKS
2. APIM as an internal gateway to AKS
3. APIM as an ingress service in AKS

We will discuss the use cases for choosing the best deployment strategy for the APIM gateway that suits your needs.

NDC Sydney 2021 Sessionize Event

November 2021

Azure Community Conference 2021 Sessionize Event

October 2021

AzConf Sessionize Event

November 2020

NDC Sydney 2020 Sessionize Event

October 2020 Sydney, Australia

ServerlessDays ANZ 2020 Sessionize Event

September 2020

NDC Melbourne 2020 - Online Workshop Event Sessionize Event

July 2020
