The number of connected embedded edge computing Internet of Things (IoT) devices has been increasing over the years, contributing to the significant growth of available data in different scenarios. Consequently, machine learning algorithms have emerged to enable task automation and process optimization based on those data. However, due to the computational complexity of some learning methods that implement geometric classifiers, it is challenging to map them onto embedded systems or devices with limited size, processing, memory, and power resources while meeting the desired requirements. This hampers the applicability of these methods to complex industrial embedded edge applications. This work evaluates strategies to reduce classifiers' implementation costs based on the CHIP-clas model, independent of hyperparameter tuning and optimization algorithms. The proposal aims to evaluate the tradeoff between numerical precision and model performance and to analyze hardware implementations of a distance-based classifier. Two 16-bit floating-point formats were compared to the 32-bit floating-point implementation. In addition, a new hardware architecture was developed and compared to the state-of-the-art reference. The results indicate that the model is robust to low-precision computation, providing results statistically equivalent to those of the baseline model, while also showing statistically equivalent performance and a global speed-up factor of approximately 4.39 in processing time.
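The kind of precision-versus-accuracy comparison described above can be illustrated with a minimal sketch. The snippet below is not the authors' CHIP-clas implementation; it uses a simple nearest-centroid rule as a stand-in distance-based classifier, synthetic two-class data, and NumPy's float16/float32 types to show how one might measure the agreement between a reduced-precision and a full-precision model.

```python
import numpy as np

def nearest_centroid_predict(X, centroids, dtype):
    """Assign each sample to the nearest class centroid at the given float precision."""
    Xc = X.astype(dtype)
    Cc = centroids.astype(dtype)
    # Squared Euclidean distance from every sample to every centroid,
    # computed in the requested precision (sqrt is unnecessary for argmin).
    d2 = ((Xc[:, None, :] - Cc[None, :, :]) ** 2).sum(axis=-1)
    return np.argmin(d2, axis=1)

# Synthetic two-class data for illustration only
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 8)),
               rng.normal(3.0, 1.0, (200, 8))])
y = np.repeat([0, 1], 200)
centroids = np.stack([X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)])

pred32 = nearest_centroid_predict(X, centroids, np.float32)
pred16 = nearest_centroid_predict(X, centroids, np.float16)
print("float32 accuracy:   ", (pred32 == y).mean())
print("float16 accuracy:   ", (pred16 == y).mean())
print("prediction agreement:", (pred32 == pred16).mean())
```

In the same spirit as the abstract's claim of statistical equivalence, one would typically repeat such a comparison over multiple datasets or resamples and apply a suitable statistical test, rather than relying on a single run.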