Abstract

We use 250 billion microcontrollers daily in electronic devices that are capable of running machine learning models on board. Unfortunately, most of these microcontrollers are highly constrained in computational resources such as memory and clock speed. These are the same resources that play a key role in training and running a machine learning model on an ordinary computer, but in a microcontroller environment their scarcity makes a critical difference. A new paradigm known as tiny machine learning has therefore emerged to meet the constraints of embedded devices. In this review, we discuss the resource optimization challenges of tiny machine learning and the methods, such as quantization, pruning, and clustering, that can be used to overcome them. Furthermore, we summarize the present state of tiny machine learning frameworks, libraries, development environments, and tools. Benchmarking tiny machine learning devices is another concern: the same microcontroller constraints, together with the diversity of hardware and software, create benchmarking challenges that must be resolved before performance differences between embedded devices can be measured reliably. We also discuss emerging techniques and approaches for boosting and expanding the tiny machine learning process and for improving data privacy and security. Finally, we draw conclusions about tiny machine learning and its future development.
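To give a concrete feel for one of the techniques named above, the sketch below shows per-tensor symmetric int8 quantization of a weight matrix in plain NumPy. It is an illustrative toy under our own assumptions (the function names, the single per-tensor scale, and the use of NumPy are ours, not the review's), but it captures why quantization reduces model storage roughly fourfold on memory-limited microcontrollers.

    # Minimal sketch of post-training 8-bit quantization (illustrative only,
    # not the specific tooling surveyed in the review).
    import numpy as np

    def quantize_int8(weights: np.ndarray):
        """Symmetrically map float32 weights to int8; return values and scale."""
        # One scale per tensor (an assumption; per-channel scales are also common).
        scale = max(np.max(np.abs(weights)), 1e-8) / 127.0
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        """Recover approximate float32 weights for comparison."""
        return q.astype(np.float32) * scale

    w = np.random.randn(128, 64).astype(np.float32)   # example layer weights
    q, scale = quantize_int8(w)
    print(w.nbytes, "bytes ->", q.nbytes, "bytes")     # ~4x smaller storage
    print("max abs error:", np.max(np.abs(w - dequantize(q, scale))))

The accuracy cost of this rounding is usually small for well-conditioned weights, which is why 8-bit quantization is a standard first step when fitting models into microcontroller flash and RAM budgets.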
