Smaller Is Better: Why Machine Learning on Microcontrollers Matters
In the world of Machine Learning (ML), we tend to think that bigger is better: our datasets, models, and compute stacks can never be large enough. And while bigger may be better for training, relying on the cloud for inference means your latency and privacy risks grow bigger, too. All that data you're streaming to the cloud in real time slows your application down and puts your users at risk.
Thanks to emerging support for inference on microcontrollers (MCUs) in frameworks like TensorFlow, it's increasingly possible to build ML solutions that leverage the cloud for training and embedded devices for real-time predictions that are fast and secure. For the Python developer, targeting MCUs requires only a minor adjustment to your training workflow.
In this session, we'll explore both the why and the how of ML on MCUs: from training a model and optimizing it for size and performance, to building a microcontroller application that runs real-time inference on an embedded device.
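The "optimizing for size" step mentioned above typically relies on post-training quantization: mapping a model's float32 weights onto 8-bit integers, which shrinks storage roughly 4x so the model fits in MCU flash. Real toolchains such as TensorFlow Lite do this per-tensor or per-channel with careful calibration; the plain-Python sketch below is only an illustration of the core idea, not the TensorFlow implementation.

```python
def quantize(weights):
    """Affine-quantize a list of floats to int8-range integers.

    Returns the quantized values plus the (scale, zero_point)
    pair needed to map them back to floats.
    """
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0          # step size between int8 levels
    zero_point = round(-lo / scale) - 128   # int that represents 0.0
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point


def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized form."""
    return [(qi - zero_point) * scale for qi in q]


# Toy "weights" standing in for a trained layer's parameters.
weights = [0.5, -1.2, 0.03, 0.9, -0.7]
q, scale, zp = quantize(weights)
approx = dequantize(q, scale, zp)

# Each recovered weight is within half a quantization step of the
# original -- the accuracy cost you trade for a 4x smaller model.
assert all(abs(a - w) <= scale / 2 + 1e-9 for a, w in zip(approx, weights))
```

In practice you would let your framework's converter handle this (supplying a representative dataset for calibration), but the trade-off is the same: a small, bounded loss of precision in exchange for a model compact enough for an embedded target.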

Brandon Satrom
Developer Experience @ Blues Wireless
Austin, Texas, United States