WebAssembly based AI as a Service with Kubernetes

WebAssembly (WASM) is being adopted in cloud-native applications, and there is increasing demand to support scripting-language applications and libraries in WASM. This allows WASM runtimes such as WasmEdge (a lightweight, high-performance runtime for cloud-native, edge, and decentralized devices) to run serverless functions and APIs written in scripting languages. Building on the large-scale adoption and benefits of serverless computing, this talk focuses on deploying these functions as a Function-as-a-Service (FaaS).
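
As a rough sketch of this model, a handler like the hypothetical one below can be compiled to the `wasm32-wasi` target and executed by a WASM runtime such as WasmEdge; the stdin/stdout request convention and the handler logic are assumptions for illustration only.

```rust
// Minimal FaaS-style handler sketch, compiled to wasm32-wasi.
// Reads a request body from stdin and writes a response to stdout.
// The I/O convention and the "echo" logic are illustrative placeholders.
use std::io::{self, Read, Write};

fn main() -> io::Result<()> {
    let mut request = String::new();
    io::stdin().read_to_string(&mut request)?;

    // Hypothetical handler logic: echo the trimmed request back as JSON.
    let response = format!("{{\"echo\": \"{}\"}}", request.trim());

    io::stdout().write_all(response.as_bytes())?;
    Ok(())
}
```

Built with `cargo build --target wasm32-wasi --release`, the resulting module can be run with the WasmEdge CLI (for example, `wasmedge target/wasm32-wasi/release/handler.wasm`) and wrapped by a FaaS framework's trigger.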

Machine learning inference is often a computationally intensive task, and edge applications could greatly benefit from the speed of WebAssembly; unfortunately, Linux containers are too heavy for such tasks. When deploying machine learning models this way, another problem we face is that standard WebAssembly provides only very limited access to the native OS and hardware, such as multi-core CPUs, GPUs, or TPUs, which is not ideal for the systems we target. The talk also shows how the WebAssembly System Interface (WASI) can be used to get security, portability, and native speed for ML models.
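
As an illustration of that WASI-based approach, the sketch below uses the high-level API of the `wasi-nn` Rust crate; the model file, backend, tensor shape, and output size are placeholders, and the exact API surface may differ between crate versions.

```rust
// Minimal sketch of ML inference through WASI-NN from inside a Wasm module.
// Assumes the `wasi-nn` crate's GraphBuilder API; the model path, backend,
// input shape, and output length below are illustrative placeholders.
use wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

fn main() {
    // Ask the host's WASI-NN backend (here TensorFlow Lite) to load the model.
    let graph = GraphBuilder::new(GraphEncoding::TensorflowLite, ExecutionTarget::CPU)
        .build_from_files(["model.tflite"])
        .expect("failed to load model");
    let mut ctx = graph
        .init_execution_context()
        .expect("failed to create execution context");

    // Placeholder input tensor: a 1 x 224 x 224 x 3 image of zeros.
    let input = vec![0f32; 224 * 224 * 3];
    ctx.set_input(0, TensorType::F32, &[1, 224, 224, 3], &input)
        .expect("failed to set input");

    // Inference runs in the host runtime, so it can use native CPU/GPU backends.
    ctx.compute().expect("inference failed");

    let mut output = vec![0f32; 1001];
    ctx.get_output(0, &mut output).expect("failed to read output");
    println!("highest score: {:?}", output.iter().cloned().fold(f32::MIN, f32::max));
}
```

The point of the design is that the Wasm module stays small and portable, while the heavy numeric work is delegated through WASI-NN to native backends in the host runtime.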

To top it off, the talk ends with a demo of deploying a machine learning model as a serverless function using WASM.

Shivay Lamba

Developer Relations

New Delhi, India
