Abstract

Human Activity Recognition (HAR) has long attracted researchers, not only because of its wide adoption in industry but also because of its potential in healthcare and everyday household applications. To make this technology accessible to everyone, the models built and trained in this field must be not only high performing but also optimized to incur the least possible overhead. This brings TinyML into the picture, a field that specializes in optimizing models with respect to model size, energy consumption, and network bandwidth usage. This work therefore applies optimization techniques such as pruning and quantization to previously proposed models and analyzes the changes they cause in accuracy and model size. Our results indicate that by applying both pruning and quantization to a human activity recognition model, the model can be compressed up to 10x without a severe loss in accuracy. We evaluate three models on the UCI-HAR dataset and compare the outcomes of the experiments.
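For concreteness, the two-stage pipeline the abstract describes (magnitude pruning followed by post-training quantization) can be sketched with TensorFlow and the TensorFlow Model Optimization Toolkit. The architecture, sparsity target, schedule, and dummy data below are illustrative assumptions, not the paper's exact configuration; only the input/output dimensions (561 features, 6 activity classes) follow the UCI-HAR dataset.

    import numpy as np
    import tensorflow as tf
    import tensorflow_model_optimization as tfmot

    # Hypothetical stand-in for one of the evaluated HAR classifiers.
    # UCI-HAR provides 561 engineered features and 6 activity classes.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(561,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(6, activation="softmax"),
    ])

    # Dummy data only to keep the sketch runnable; substitute the real
    # UCI-HAR training split in practice.
    x_train = np.random.rand(256, 561).astype("float32")
    y_train = np.random.randint(0, 6, size=(256,))

    batch_size, epochs = 32, 2
    end_step = int(np.ceil(len(x_train) / batch_size)) * epochs

    # Stage 1: magnitude pruning. Wrap the model and fine-tune with the
    # pruning callback so sparsity ramps from 0% to 50% during training.
    pruned = tfmot.sparsity.keras.prune_low_magnitude(
        model,
        pruning_schedule=tfmot.sparsity.keras.PolynomialDecay(
            initial_sparsity=0.0, final_sparsity=0.5,
            begin_step=0, end_step=end_step))
    pruned.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
    pruned.fit(x_train, y_train, epochs=epochs, batch_size=batch_size,
               callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

    # Stage 2: strip the pruning wrappers, then apply post-training
    # quantization through the TFLite converter, which stores weights
    # at reduced precision and shrinks the serialized model.
    stripped = tfmot.sparsity.keras.strip_pruning(pruned)
    converter = tf.lite.TFLiteConverter.from_keras_model(stripped)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()
    with open("har_pruned_quantized.tflite", "wb") as f:
        f.write(tflite_model)

Comparing the size of the saved Keras model against the resulting .tflite file, and re-running evaluation on the test split, is the kind of accuracy-versus-size measurement the abstract refers to.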
