Shivay Lamba

Developer Relations

New Delhi, India

Shivay Lamba is a software developer specializing in DevOps, machine learning, and full-stack development.

He is an open-source enthusiast and has been part of programs such as Google Code-in and Google Summer of Code as a mentor, and is currently an MLH Fellow. He has also worked at organizations such as Amazon, EY, and Genpact. He is a TensorFlow.js SIG member and community lead from India.

Awards

  • Most Active Speaker 2023

Area of Expertise

  • Information & Communications Technology

Topics

  • JavaScript
  • Web Development
  • Machine Learning & AI
  • DevOps
  • TensorFlow.js
  • Android
  • Android Software Development
  • Android Development
  • Developing Android Apps
  • Android Architecture
  • Android Tools
  • Android & iOS Application Engineering
  • Firebase
  • Android Things / IoT
  • Progressive Web App
  • Machine Learning and AI
  • Robotics and Drone Technologies
  • TensorFlow
  • Android Design
  • IoT and Android Things
  • JavaScript & TypeScript
  • Machine Learning
  • Artificial Intelligence and Machine Learning for Cybersecurity
  • Azure Machine Learning
  • Rust
  • Kubernetes
  • Zero-Trust Security
  • Docker
  • Python
  • MicroPython
  • Python 3
  • Python on Azure
  • Python Programming for Beginners
  • DigitalOcean Kubernetes Service
  • Kubernetes Security
  • Zero Trust
  • Azure Machine Learning Studio

Achieving End-to-End Compliance in Prod: How SBOM Attestation Transforms Vulnerability Management

In the ever-evolving landscape of microservices, achieving end-to-end compliance is a paramount concern. This session explores the transformative power of the Software Bill of Materials (SBOM) in providing a comprehensive and efficient approach. SBOM attestations emerge as a cornerstone, simplifying the intricate processes of vulnerability management and ensuring a robust SDLC.
As the scale of an organization increases, fixing CVEs and keeping track of all dependencies becomes crucial to staying compliant as we release more and more software while shipping new features for our customers.
This session demonstrates how to orchestrate pipelines that integrate tools such as Syft to make software releases compliant with regulations while maintaining attestations.
Join us to learn more about the multifaceted benefits of SBOM adoption, where attestation becomes a game-changer, vulnerabilities are managed, and compliance is achieved in software release management.
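For readers who want a concrete picture of one such pipeline step, here is a rough, hedged sketch in TypeScript that shells out to the Syft CLI to produce an SPDX-format SBOM for a container image. The image name is purely illustrative, and Syft is assumed to be installed on the build machine.

```typescript
// Hedged sketch: generate an SPDX SBOM for a container image by invoking the Syft CLI.
// Assumes `syft` is on the PATH; the image reference below is purely illustrative.
import { execFileSync } from "child_process";
import { writeFileSync } from "fs";

const image = "ghcr.io/example/app:1.0.0"; // hypothetical image reference

// `syft <image> -o spdx-json` prints an SPDX JSON document to stdout.
const sbom = execFileSync("syft", [image, "-o", "spdx-json"], { encoding: "utf8" });

// Persist the SBOM so a later pipeline stage can attest to it and scan it for CVEs.
writeFileSync("sbom.spdx.json", sbom);
console.log(`SBOM written for ${image}`);
```

In a real pipeline, the resulting SBOM would then be signed as an attestation and fed to a vulnerability scanner.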

The Kubernetes Hunger Games: Distro Performance in the Edge

Join us at Kubernetes & Edge Day to delve into the world of Kubernetes distros like k8s, k3s, KubeEdge and the newly introduced distro k2d. This talk offers an in-depth look at deploying and running Kubernetes in edge environments. We'll explore use cases, best practices, and how different distros meet diverse edge computing needs. Discover how k3s manages IoT devices and KubeEdge powers AI at the edge. The session will navigate the ecosystem, emphasizing the strengths of each Kubernetes distro variant. Gain insights into the strategic choice of distros for specific scenarios.

This session is ideal for those aiming to enhance their understanding of Kubernetes in edge computing and to understand which distro works best for a specific use case.

Server-side rendering using WebAssembly

Abstract:

Constantly thinking about how best to achieve server-side rendering (SSR) for your application? SSR renders client-side SPAs on the server and sends a fully rendered page back to the client.
Is there a way to achieve server-side rendering that is also secure and has a smaller footprint? Enter WebAssembly!

This talk shares how to achieve server-side rendering using WebAssembly and WasmEdge, a WebAssembly runtime. It also covers the benefits of using WebAssembly for server-side rendering and includes a demo of launching a React application with the WasmEdge runtime; a minimal rendering sketch follows the outline below.

Table of Contents:
1. Quick introduction to WebAssembly
2. What is WasmEdge
3. How WebAssembly and WasmEdge can be used to achieve server-side rendering
4. Benefits of using WebAssembly for server-side rendering
5. Demonstration of launching a React application using WasmEdge, followed by static rendering using this technique
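As a minimal sketch of the rendering step only (assuming React and react-dom are installed; the talk's demo runs comparable code inside the WasmEdge JavaScript runtime rather than Node.js):

```typescript
// Minimal server-side rendering sketch. Assumes react and react-dom are installed.
// The talk runs similar code inside WasmEdge's JavaScript runtime; plain Node.js is
// used here only to keep the example self-contained.
import React from "react";
import { renderToString } from "react-dom/server";

// A tiny component, written with createElement to avoid a JSX build step.
function App(props: { name: string }) {
  return React.createElement("h1", null, `Hello from the server, ${props.name}!`);
}

// Render the component tree to an HTML string and wrap it in a page shell.
const html = renderToString(React.createElement(App, { name: "WasmEdge" }));
console.log(`<!doctype html><html><body><div id="root">${html}</div></body></html>`);
```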

Securing service meshes with eBPF

eBPF has several use cases: adding traffic control, creating network policies, adding observability, routing traffic to a service mesh control plane, or load balancing. Securing your applications with a defense-in-depth architecture and gaining visibility into your application behavior are the two key requirements for success in any modern cloud-native deployment. While service meshes like Istio provide these capabilities via a user-space proxy mechanism, it's not always feasible to inject sidecar proxies for all your applications. On the other hand, kernel technologies like eBPF, when used in a CNI like Cilium, provide security and metrics transparently but lack the richness of information and policy capabilities provided by a layer 7 proxy with strong identities.
In this session, we will present how to leverage the capabilities of both these technologies to achieve better security and observability, ensuring all your applications have uniform policy and visibility irrespective of whether they are in the mesh or not, or whether they run as containers in Kubernetes or on long-running VMs where making privileged changes is often not possible.

Next gen machine learning web apps with WASM

Have you ever wondered whether JavaScript is a viable alternative to Python or R for creating machine learning models? After all, a 2019 Stack Overflow survey found that JavaScript is the language developers use the most. The idea may seem far-fetched, because neural networks require a lot of computational power and JavaScript was not intended for high-speed computing. But hold on: JavaScript libraries like ONNX.js and TensorFlow.js are here to save the day! Using WebAssembly as a supported backend for machine learning inference and training, these libraries let you do all the ML heavy lifting right in the browser. In this talk, I'll go into further detail on how to create machine learning applications and use Wasm with JavaScript.
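To make this concrete, here is a small sketch, assuming the @tensorflow/tfjs and @tensorflow/tfjs-backend-wasm packages, that switches TensorFlow.js onto its WebAssembly backend and runs a tiny model entirely in the browser:

```typescript
// Sketch: run TensorFlow.js on the WebAssembly backend.
// Assumes @tensorflow/tfjs and @tensorflow/tfjs-backend-wasm are installed and bundled.
import * as tf from "@tensorflow/tfjs";
import "@tensorflow/tfjs-backend-wasm"; // side effect: registers the 'wasm' backend

async function main() {
  await tf.setBackend("wasm"); // use Wasm kernels instead of the default WebGL/CPU
  await tf.ready();

  // A tiny, untrained model defined and executed entirely in the browser.
  const model = tf.sequential();
  model.add(tf.layers.dense({ inputShape: [4], units: 1 }));

  const input = tf.tensor2d([[1, 2, 3, 4]]);
  const output = model.predict(input) as tf.Tensor;
  output.print(); // the numeric work happens in WebAssembly
}

main();
```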

Machine Learning in Web using TensorFlowJS

This workshop gives an introduction to TensorFlow.js, an open-source JavaScript library that allows machine learning models to run in the browser itself and helps integrate those models with web applications. TensorFlow.js gives web developers a powerful tool for building dynamic web apps using machine learning. It offers creative professionals an easy way to use machine learning to create powerful, intuitive applications with little or no prior knowledge of machine learning.
Conventional methods of deploying machine learning models for web applications can be daunting for web developers who specialize in JavaScript. Learning Python, deploying models, and calling Python-built models through APIs from a Node.js backend are some of the additional requirements for a web developer with little or no experience in Python or machine learning, and training the models might require powerful CPUs/GPUs. This is where TensorFlow.js (TFJS) comes into the picture. It allows standard machine learning libraries and models to be used directly with JavaScript, runs the models in the browser (client side) or on the backend with Node.js, and makes it easy for JavaScript developers to integrate machine learning models without deep knowledge of how these models work (a brief Node.js sketch follows the outline below).

Structure / Table of Contents:
1. Introduction to TensorFlow.js
2. Benefits of running TensorFlow.js with Node.js on the backend and on the frontend/client side
3. Hands-on project built with TensorFlow.js during a live demonstration
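As a brief sketch of the Node.js path (assuming the @tensorflow/tfjs-node and @tensorflow-models/mobilenet packages, plus an illustrative local image file), a pre-trained MobileNet model can classify an image like this:

```typescript
// Sketch: TensorFlow.js on the Node.js backend with a pre-trained model.
// Assumes @tensorflow/tfjs-node and @tensorflow-models/mobilenet are installed,
// and that "cat.jpg" is an illustrative local image file.
import * as tf from "@tensorflow/tfjs-node";
import * as mobilenet from "@tensorflow-models/mobilenet";
import { readFileSync } from "fs";

async function classify(path: string) {
  const image = tf.node.decodeImage(readFileSync(path), 3) as tf.Tensor3D; // JPEG/PNG -> tensor
  const model = await mobilenet.load();                                    // downloads pre-trained weights
  const predictions = await model.classify(image);
  console.log(predictions); // [{ className, probability }, ...]
  image.dispose();
}

classify("cat.jpg");
```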

Embracing Multi-tenancy while Scaling MLOps

As the adoption of MLOps for training and deploying large ML models grows, the need for multi-tenancy in MLOps systems also increases. As organizations scale their ML operations, the ability to share resources efficiently, maintain isolation, and ensure security across multiple teams becomes paramount. The talk therefore covers some of the fundamental needs and challenges of adopting multi-tenancy in MLOps.

The talk covers how features such as isolation of ML workflows, resource quotas, role-based access control, data isolation, and a shared artifact repository contribute to a secure and efficient multi-tenant environment and help overcome some of the challenges of running MLOps efficiently. Achieving all of this is hard if done independently.

The talk demonstrates how to implement the above easily and cost-effectively using open-source tools like Flyte, enabling different teams within an organization to operate in isolation while sharing resources efficiently.

Building Next Generation Web-based ML Applications Using TensorFlow.js

Join Shivay Lamba as he covers TensorFlow.js (an open-source machine learning library written in JavaScript) and the state of machine learning in JavaScript in 2022. Inspired by real-world business applications in use today, he will share the APIs and libraries that practitioners can use to start building advanced web applications. This talk is ideal for practitioners who wish to bring their next startup idea to life, or for those looking to use AI to create a competitive differentiator.

With the growing popularity of TensorFlow.js, which has seen 6x growth in developer usage since 2020, now is the time to take your first steps. This is also the time for key decision-makers to understand what is possible in JavaScript - it may surprise you! Did you know Node.js can run up to 2x faster than Python for end-to-end ML inference tasks? This demonstration-packed talk aims to educate, inspire, and enable you to rapidly prototype your next idea faster than ever. The talk is intended to be suitable for everyone, no matter your background in AI or web technologies. Everybody is welcome!

Building Machine Learning Microservices & MLOps using UnionML

The difficulty of transitioning from research to production is a prevalent issue in the machine learning development life cycle. An ML team may need to modularize and rework their code to work more effectively in production. Occasionally, depending on whether the application requires offline, online, or streaming predictions, this can necessitate re-implementing and maintaining feature engineering or model prediction logic in several locations.

In this session, the audience will learn about an open-source microframework for creating machine learning applications. UnionML, developed by the Flyte team, offers a straightforward, user-friendly interface for specifying the fundamental components of your machine learning application, from dataset curation and sampling to model training and prediction. Using these building blocks, UnionML automatically generates the procedures required to fine-tune your models and release them to production in various prediction use cases, such as offline, online, or streaming settings. There will be a live demonstration of an end-to-end machine learning example written in Python.

We can look to the web for ideas while we consider a solution to this issue. For instance, the HTTP protocol, which provides a backbone of techniques with clearly defined but flexible interfaces, standardizes the way we move data across the internet. We were interested in posing the question, "What if we could develop, automate, and monitor data and ML pipelines at scale?" as machine learning systems proliferate across industries. https://github.com/unionai-oss/unionml

Bringing Generative Art and LLMs to the Edge

This talk will focus on the challenges of running large models on different system architectures and the need to optimize the models themselves to run efficiently on the edge. We will discuss how to optimize models to reduce model size and computational complexity, and how to use different techniques to improve inference time. Additionally, we will explore the implications of running such models on edge devices, including issues such as memory and bandwidth constraints, and discuss how to optimize these models to achieve the best performance across different system architectures. We not only show how all of the aforementioned tasks can be done with Kubernetes at the edge to deploy your ML models in an optimal way, but also how to make the best use of edge hardware accelerators like GPUs or TPUs and how technologies like WebAssembly can support model deployments on a wide range of edge architectures.

Bridging the Gap: Achieving Security and Observability Harmony with User and Kernel Synergy

Securing your applications with a defense-in-depth architecture and gaining visibility into your application behavior are the two essential requirements for success in any modern cloud-native deployment. While Istio provides these capabilities via a user-space proxy mechanism, injecting sidecar proxies for all your applications is not always feasible. On the other hand, kernel technologies like eBPF, when used in a CNI like Cilium, provide security and metrics transparently but lack the richness of information and policy capabilities provided by a layer 7 proxy with strong identities.
In this session, I will present how we can leverage the capabilities of both these technologies to achieve better security and observability, ensuring all your applications have a uniform policy and visibility irrespective of whether they are in the mesh or not, or whether they run as containers in K8s or on long-running VMs where making privileged changes is often not possible.

Automating MLOps: Event-Driven Machine Learning Workflows with Argo Events

Are manual, time-consuming workflows getting you down? Are you looking for a way to automate your workflows so they become more dependable and efficient? If so, this session is for you!

In this engaging talk, participants will discover how to use Argo Events and Flyte to harness the potential of event-driven workflows. By using Argo Events to trigger Flyte machine learning workflows, you can automate MLOps in reaction to external events.

You'll be surprised how easy it is to set up Argo Events and integrate it with Flyte workflows. By using Argo Events to trigger Flyte executions, you can easily automate machine learning workflows for a seamless and smooth MLOps experience. Our practical example will show you how.

Participants will leave this session with a basic understanding of how to automate their machine learning workflows using Argo Events and Flyte. Unlock the potential of event-driven workflows!

Accelerating Search in Rails with Meilisearch: A Rust-Based, Real-Time Search Engine

In an age of information overload, search has emerged as a crucial component of contemporary web apps. Nevertheless, when it comes to real-time, blazing-fast search results, conventional search engines frequently fall short. Introducing Meilisearch, a Rust-based, open-source search engine. Even on datasets with millions of records, Meilisearch is built to deliver search results in milliseconds.

By the end of this talk, you'll understand how Meilisearch can accelerate search in Rails applications and enable real-time, lightning-fast search results.

https://github.com/meilisearch/meilisearch-rails
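The talk itself uses the meilisearch-rails gem linked above; purely as an illustration of the underlying API shape, here is a hedged TypeScript sketch using the official JavaScript client, assuming a locally running Meilisearch instance and made-up documents:

```typescript
// Illustrative sketch with the official `meilisearch` JS client.
// Assumes a Meilisearch instance is running locally; the data below is made up.
import { MeiliSearch } from "meilisearch";

const client = new MeiliSearch({ host: "http://localhost:7700" });

async function demo() {
  const index = client.index("movies");

  // Index a couple of documents (Meilisearch processes them asynchronously).
  await index.addDocuments([
    { id: 1, title: "Carol" },
    { id: 2, title: "Wonder Woman" },
  ]);

  // Typo-tolerant search returns results in milliseconds.
  const results = await index.search("wondr");
  console.log(results.hits);
}

demo();
```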

Achieving Efficient Multi-Tenancy in Kubernetes with KubeSlice

As Kubernetes becomes the go-to for container orchestration, managing multiple clusters can be complex and resource-intensive. Enter multi-tenancy, a solution for deploying multiple workloads in a shared cluster while maintaining isolation of network traffic, resources, and user access. We'll also discuss the benefits of multi-tenancy in K8s, such as cost savings, reduced operational overhead, and improved continuous delivery at scale.

We'll introduce 'KubeSlice', a solution that creates tenancy in a K8s cluster and extends it to multi-cluster. We'll guide you through establishing tenancy using KubeSlice, demonstrating how it creates "slices" that allow pods and services to communicate seamlessly across clusters, clouds, edges, and data centers.

By the end of this talk, attendees will understand the benefits of multi-tenancy in K8s and how to implement it using KubeSlice, empowering them to optimize their Kubernetes operations for more efficient and cost-effective workflows.

A tale of Optimizing Matrix Multiplication in Wasm & Rust

This talk explores the use of WebAssembly (Wasm) to speed up matrix multiplication for machine learning. It will begin with an overview of matrix multiplication in machine learning and its importance, then delve into the technical details of how Wasm can be used to optimize this process through various tricks and techniques. The talk will include a demonstration of using Rust and TensorFlow to perform matrix multiplication in Wasm and a comparison of its performance to traditional methods.
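The demo in the talk is written in Rust; purely to illustrate the comparison being made, the hedged TypeScript sketch below times a naive JavaScript matrix multiply against TensorFlow.js running on its Wasm backend (matrix size and setup are arbitrary):

```typescript
// Hedged sketch of the comparison idea: naive JS matrix multiply vs. TensorFlow.js
// on its Wasm backend. Assumes @tensorflow/tfjs and @tensorflow/tfjs-backend-wasm.
import * as tf from "@tensorflow/tfjs";
import "@tensorflow/tfjs-backend-wasm";

function naiveMatMul(a: number[][], b: number[][]): number[][] {
  const n = a.length, m = b[0].length, k = b.length;
  const out = Array.from({ length: n }, () => new Array(m).fill(0));
  for (let i = 0; i < n; i++)
    for (let j = 0; j < m; j++)
      for (let p = 0; p < k; p++) out[i][j] += a[i][p] * b[p][j];
  return out;
}

async function benchmark(size = 256) {
  const rand = () => Array.from({ length: size }, () =>
    Array.from({ length: size }, () => Math.random()));
  const a = rand(), b = rand();

  let t = performance.now();
  naiveMatMul(a, b);
  console.log("naive JS:", (performance.now() - t).toFixed(1), "ms");

  await tf.setBackend("wasm");
  await tf.ready();
  const ta = tf.tensor2d(a), tb = tf.tensor2d(b);
  t = performance.now();
  const result = tf.matMul(ta, tb);
  await result.data(); // force execution before stopping the timer
  console.log("tfjs wasm:", (performance.now() - t).toFixed(1), "ms");
}

benchmark();
```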

Fine-Tuning Large Language Models with Declarative ML Orchestration

Large Language Models used in tools like ChatGPT are everywhere; however, only a few organisations with massive computing resources are capable of training such large models. While eager to fine-tune these models for specific applications, the broader ML community often grapples with significant infrastructure challenges.

In this session, the audience will learn how open-source ML tooling like Flyte (a Linux Foundation open-source orchestration platform) can provide a declarative specification for the infrastructure required for a wide array of ML workloads, including fine-tuning LLMs, even with limited resources. Attendees will learn how to leverage Flyte's capabilities to streamline their ML workflows, overcome infrastructure constraints, reduce cost, and unlock the full potential of LLMs for their specific use case, making it easier for a larger audience to leverage and train LLMs.

WebAssembly based AI as a Service with Kubernetes

As WebAssembly (Wasm) is adopted in cloud-native applications, there is increasing demand to support scripting-language applications and libraries in Wasm. This allows Wasm runtimes, such as WasmEdge (a lightweight, high-performance runtime for cloud-native, edge, and decentralized devices), to run serverless functions written in scripting languages. Following the large-scale adoption and benefits of serverless computing, we focus on deploying these as Functions-as-a-Service.

Machine learning inference is often a computationally intensive task, and edge applications could greatly benefit from the speed of WebAssembly; unfortunately, Linux containers end up being too heavy for such workloads. Another problem with demonstrating machine learning deployments in this fashion is that standard WebAssembly provides very limited access to the native OS and hardware, such as multi-core CPUs, GPUs, or TPUs, which is not ideal for the systems we target. The talk also shows how the WebAssembly System Interface (WASI) can be used to get security, portability, and native speed for ML models.

To top it off, the talk ends with a demo of deploying a machine learning model as a serverless function using Wasm.

TensorFlow.js Model Updates

TensorFlow.js has seen huge growth in the past year. Many existing models have received significant performance updates, and a few new models have been released, ranging from pose estimation to natural language processing.

This lightning talk will go through all the updates around the TensorFlow.js models.
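As one example of the updated models, here is a sketch (assuming the @tensorflow-models/pose-detection package and an already-playing video element) of running MoveNet pose estimation:

```typescript
// Sketch: MoveNet pose estimation with the pose-detection package.
// Assumes @tensorflow/tfjs and @tensorflow-models/pose-detection are installed,
// and that `video` is an HTMLVideoElement that is already playing.
import * as tf from "@tensorflow/tfjs";
import * as poseDetection from "@tensorflow-models/pose-detection";

async function estimatePoses(video: HTMLVideoElement) {
  await tf.ready();
  const detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.MoveNet
  );
  const poses = await detector.estimatePoses(video);
  // Each pose contains named keypoints (nose, left_shoulder, ...) with x/y and a score.
  console.log(poses[0]?.keypoints);
}
```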

Tensorflow.js Tasks API

Google I/O introduced the TFJS Task API, which provides a unified experience for running task-specific models on the web. It helps integrate models from different runtimes, such as TFLite and MediaPipe, so they can be served through TensorFlow.js.
This lightning talk will go through the TFJS Task API and its implementation.

Make a smart camera using a pre-trained TensorFlow.js Machine Learning model

Learn the benefits of using TensorFlow.js and pre-made ML models.

Create a fully working web page that classifies common objects in real time using your webcam (a code sketch follows the list below) by:

1. Creating an HTML skeleton for content
2. Defining styles of the HTML content
3. Detecting and enabling a webcam stream
4. Loading a pre-trained TensorFlow.js model
5. Using the loaded model to make continuous classifications
6. Drawing a bounding box around objects in the image.
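A compact sketch of those steps, assuming the @tensorflow/tfjs and @tensorflow-models/coco-ssd packages and an existing <video> element, might look like this:

```typescript
// Sketch of the smart-camera pipeline: detect objects in webcam frames with the
// pre-trained COCO-SSD model. Assumes @tensorflow/tfjs and @tensorflow-models/coco-ssd.
import "@tensorflow/tfjs";
import * as cocoSsd from "@tensorflow-models/coco-ssd";

async function startSmartCamera(video: HTMLVideoElement) {
  // Enable the webcam stream (steps 1-3 of the list above: page markup assumed).
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  // Load the pre-trained model (step 4).
  const model = await cocoSsd.load();

  // Continuously detect objects; each prediction has a class, score, and bbox (steps 5-6).
  async function loop() {
    const predictions = await model.detect(video);
    for (const p of predictions) {
      console.log(p.class, p.score.toFixed(2), p.bbox); // bbox = [x, y, width, height]
    }
    requestAnimationFrame(loop);
  }
  loop();
}
```

From the returned bounding boxes, the page can then draw rectangles and labels over the live video feed.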

CNCF-hosted Co-located Events North America 2023 Sessionize Event

November 2023 Chicago, Illinois, United States

API World 2023 Sessionize Event

October 2023 Santa Clara, California, United States

droidcon Lisbon 2023 Sessionize Event

September 2023 Lisbon, Portugal

React Rally 2023 Sessionize Event

August 2023 Salt Lake City, Utah, United States

Chain React 2023 Sessionize Event

May 2023 Portland, Oregon, United States

KubeHuddle Toronto 2023 Sessionize Event

May 2023 Toronto, Canada

Civo Navigate Sessionize Event

February 2023 Tampa, Florida, United States

droidcon Kenya 2022 Sessionize Event

November 2022 Nairobi, Kenya

DataFestAfrica Sessionize Event

October 2022 Lagos, Nigeria

Kubernetes Community Days Africa 2022 - Lagos Sessionize Event

July 2022 Lagos, Nigeria

Google IO Extended Universal Sessionize Event

July 2021
