This presentation will show you how to deploy machine learning models to affordable microcontroller-based systems, using the Python you already know. Combined with sensors such as a microphone, accelerometer, or camera, this makes it possible to create devices that automatically analyze and react to physical phenomena. This enables a wide range of useful and fun applications and is often referred to as "TinyML".

The presentation will cover key concepts and explain the steps of the process. We will train the machine learning models using standard scikit-learn and Keras, and then execute them on-device using the emlearn library. MicroPython will be used to run Python code on the microcontroller. We will demonstrate practical use cases with different sensors, such as Sound Event Detection (microphone), Image Classification (camera), and Human Activity Recognition (accelerometer).
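To illustrate the train-then-deploy workflow described above, here is a minimal sketch: a small scikit-learn classifier is trained on the host and then converted with emlearn for on-device execution. The synthetic dataset, model parameters, and output filename ("model.h") are illustrative assumptions, not the presenter's actual example; the emlearn calls follow its documented convert/save workflow, but exact arguments may differ for your version.

```python
# Host-side sketch (assumed example): train with scikit-learn, convert with emlearn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import emlearn

# Toy stand-in for real sensor features (e.g. accelerometer statistics).
X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           n_classes=3, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# A small tree ensemble fits typical microcontroller memory budgets.
model = RandomForestClassifier(n_estimators=10, max_depth=6)
model.fit(X_train, y_train)
print('Test accuracy:', model.score(X_test, y_test))

# Convert the trained model to an efficient on-device representation.
# 'model.h' is a hypothetical output filename for the generated code.
cmodel = emlearn.convert(model)
cmodel.save(file='model.h', name='my_model')
```

The generated model can then be included in firmware and invoked from the MicroPython application running on the microcontroller, which is the deployment path the talk walks through.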

Jon Nordby

Affiliation: Soundsensing

Jon is a Machine Learning Engineer specializing in IoT systems. He holds a Master's degree in Data Science and a Bachelor's degree in Electronics Engineering, and has published several papers on applied Machine Learning, covering topics such as TinyML, Wireless Sensor Systems, and Audio Classification.

These days, Jon is co-founder and Head of Data Science at Soundsensing, a leading provider of condition monitoring solutions for commercial buildings and HVAC systems. He is also the creator and maintainer of emlearn, an open-source inference engine for microcontrollers and embedded systems.

Visit the speaker at: GitHub, Homepage