
TinyML at the Edge: Deploying and Optimizing AI Workloads on Zephyr RTOS

TinyML is transforming edge computing by enabling smart inference directly on microcontrollers, but tight resource limits make deployment complex. Zephyr RTOS, being lightweight, modular, and feature-rich, is becoming a go-to platform for building embedded AI systems. This session walks through how to run TinyML workloads effectively on Zephyr using inference engines such as TensorFlow Lite Micro, microTVM, emlearn, and LiteRT, along with decision points for selecting a runtime based on hardware constraints. We will explore how the runtime simplifies AutoML workflows while supporting multiple backends. Attendees will also learn to use Zephyr's Linkable Loadable Extensions (LLEXT) to hot-swap models without reflashing. Performance optimization techniques such as quantization and operator fusion will be covered, along with benchmarking on physical devices versus Renode simulation. The talk concludes with real-world examples such as health monitors and predictive maintenance, best practices for over-the-air (OTA) model updates, and the future of embedded AI with Zephyr.

Amandeep Singh

Founder & CEO Welzin.ai

Chandigarh, India


