Speaker

Angel M De Miguel Meana

Staff Engineer at VMware AI Labs

Angel is a Staff Engineer at VMware AI Labs working on multiple WebAssembly initiatives. His background is as a full-stack web developer, working primarily with UIs, APIs, automation, and Kubernetes. Angel is an Open Source (OSS) enthusiast, both as a creator of and contributor to different projects. In his spare time you will find him playing with new frameworks and languages any time he has a chance.

Wasm Workers Server: Portable serverless apps with WebAssembly

The worker model, pioneered by Cloudflare and others, provides a lightweight, secure way of running serverless workloads.

This talk introduces Wasm Workers Server, an open-source serverless implementation of this model. It leverages the portability, performance, and security features of WebAssembly (Wasm). We will show why and how we designed Wasm Workers Server (wws) to maximize compatibility with existing solutions.

Maintaining a balance between delivered features and compatibility is complex. Every platform wants to provide the latest features to unlock new capabilities. Supporting multiple languages is a big one, as developers' productivity relies heavily on reusing their knowledge.

We designed wws based on two principles: fitting into developers' existing workflows and maintaining compatibility with other platforms. This framework enables you to develop and run serverless applications written in different programming languages, from Rust and Go to JavaScript and Python.

Runwasi: WebAssembly serverless for containerd

The worker model, pioneered by Cloudflare and others, provides a lightweight, secure way of running serverless workloads. WebAssembly (Wasm) is a portable binary format that allows running code compiled from a variety of languages, such as Rust, JavaScript, and Python.

This talk introduces runwasi, an open-source library for developing containerd shims that leverage WebAssembly and the WebAssembly System Interface (WASI). Together, Wasm and runwasi enable the deployment of secure, lightweight apps wherever you can run containers.

Thanks to the sandboxed execution environment, the workloads get an extra isolation layer. The capability-based security model that WASI follows ensures the functions only have access to the resources they require. Application distribution is another challenge. Wasm modules are small and compact (around 20 MB for a Python Wasm module).

This talk will cover some of the challenges involved and include a practical demonstration of what it looks like to run a Wasm serverless app on top of Kubernetes.

Getting started with AI and WebAssembly

AI/ML (Artificial Intelligence / Machine Learning) and related technologies are everywhere nowadays. After the release of popular products like ChatGPT, both users and developers realized the new possibilities that this technology opens up.

In this talk, we will cover WASI-nn, a technology that sits at the crossroads of AI/ML and WebAssembly. WASI-nn is a standard proposal for performing ML inference. It allows WebAssembly modules to call the low-level pieces required for running inference. WASI-nn abstracts the module from the underlying system, allowing the host to use any available hardware. In other words, you can use the same module to run inference on a server, in a browser, or on an IoT edge device.

We will explain the basic concepts and showcase multiple demos using different runtimes, such as Wasmtime and WasmEdge, as well as more complete applications like Wasm Workers Server. After this talk, you will be ready to build and deploy your first AI/ML application using Wasm.

Develop Wasm applications with Docker

Developers love containers and use them in a variety of situations and setups. Among other innovations, Docker (the tool) introduced a great developer experience for building, publishing, and running containers. It suddenly became easier to run complex applications in heterogeneous environments, including your laptop!

WebAssembly (Wasm) is a compact binary format for compiling code to a portable target. This compiled binary, called a "module", can run anywhere a Wasm runtime is available. By design, these modules run in a sandboxed environment: they only have access to their own memory and the resources granted by the runtime. In some ways, Wasm and containers solve similar problems.
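To make that sandbox concrete, here is a minimal sketch in Node.js that instantiates a tiny hand-encoded Wasm module exporting an `add` function. The module can touch nothing the host does not pass in; the byte encoding is the standard Wasm binary format:

```javascript
// A hand-encoded Wasm module exporting `add(a, b) -> a + b`.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

// The host (here, Node.js) instantiates the module and grants it
// nothing beyond its own memory and exports.
WebAssembly.instantiate(bytes).then(({ instance }) => {
  console.log(instance.exports.add(2, 3)); // → 5
});
```

The same bytes run unchanged in a browser, on a server, or inside a container, which is the portability the talk builds on.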

This talk will explore how containers and WebAssembly can work together to unlock the potential of both technologies. You will learn how to mix and match containers and Wasm modules, the benefits of doing so, and how to run your first projects using your favorite language with WebAssembly.

Develop serverless apps with Wasm Workers Server

In this talk, we will explore how WebAssembly brings new benefits to the serverless architecture and how to develop and run applications following this model with the open-source Wasm Workers Server project. You will learn how to create your first serverless applications as Wasm modules!

This architecture is not new. It allows you to focus on the business logic of your applications: you develop the functions that process users' requests and return a result, and these independent units compose services. Then, frameworks and providers connect these pieces so users can access them.

WebAssembly is a natural fit for this model. The small binary format simplifies the distribution process. The ability to compile multiple languages into Wasm opens the possibility of mixing and matching them in the same project. The sandboxing and capability-based model ensures the functions are isolated without the cold starts of other solutions like VMs and containers.

Wasm Workers Server is an open-source project for developing applications following this model. It simplifies creating and running serverless applications in different languages. All the functions, or workers, run inside the WebAssembly sandbox.

After this talk, you will understand the benefits that WebAssembly brings to the serverless ecosystem. You will learn how to program your first application using Wasm Workers Server by creating different functions in multiple languages.