Accelerating Cascade Classifier Training with Genetic Algorithms for Edge ML Applications

Abstract

Object detection is a crucial task in computer vision, with applications spanning from face recognition to autonomous driving. While today's CNN-based methods have shown great success at this task, their relatively large model size and computational complexity limit their use on edge devices such as microcontrollers. The Viola-Jones algorithm, by contrast, has long been a cornerstone of this field, offering robustness and accuracy, and its models can be more compact than even compressed CNNs. However, as datasets and feature spaces grow, the computational demands of training an AdaBoost classifier can become prohibitively high. This becomes a bottleneck when model training must also be performed at the edge for cost and privacy reasons. In this paper, we present an approach that addresses this challenge by incorporating Genetic Algorithms (GA) and LightGBM into the AdaBoost framework for efficient feature selection, reducing training time by a factor of 50× without sacrificing accuracy. Additionally, our model exhibits a significantly lower memory footprint, with a size of 20 kB compared to 314 kB for a compressed CNN-based YOLOX architecture. This makes our approach particularly suitable for object detection on edge devices and for the TinyML community.
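To illustrate the core idea, the sketch below shows one AdaBoost round in which a genetic algorithm searches the feature space for a good weak learner instead of exhaustively evaluating every feature, which is the expensive step in classical Viola-Jones training. This is a minimal, hypothetical reconstruction on synthetic data: the random feature columns stand in for Haar-like features, the decision stumps for weak classifiers, and the GA operators (elitist selection plus index mutation) are illustrative choices, not the paper's exact method; the LightGBM component is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 500 candidate features (stand-ins for Haar features).
X = rng.normal(size=(200, 500))
y = rng.integers(0, 2, size=200) * 2 - 1        # labels in {-1, +1}
X[:, 42] += y * 1.5                              # plant one informative feature


def stump_error(f, w):
    """Weighted error of the best threshold stump on feature column f."""
    best = 1.0
    for thr in np.quantile(X[:, f], [0.25, 0.5, 0.75]):
        for sign in (1, -1):
            pred = np.where(X[:, f] > thr, sign, -sign)
            best = min(best, w[pred != y].sum())
    return best


def ga_select(w, pop_size=20, gens=10):
    """Evolve a population of feature indices; fitness = low weighted error.

    Replaces the exhaustive scan over all features with a cheap GA search.
    """
    pop = rng.integers(0, X.shape[1], size=pop_size)
    for _ in range(gens):
        fitness = np.array([stump_error(f, w) for f in pop])
        elite = pop[np.argsort(fitness)[: pop_size // 2]]            # selection
        children = elite + rng.integers(-5, 6, size=elite.size)      # mutation
        pop = np.clip(np.concatenate([elite, children]), 0, X.shape[1] - 1)
    fitness = np.array([stump_error(f, w) for f in pop])
    return int(pop[np.argmin(fitness)])


# One AdaBoost round with GA-driven feature selection.
w = np.full(len(y), 1.0 / len(y))                # uniform sample weights
f = ga_select(w)
err = stump_error(f, w)
alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))  # weak-learner weight
print(f, round(err, 3))
```

In a full cascade, this round would repeat with re-weighted samples, so the GA cost per round (population × generations stump evaluations) stays fixed rather than scaling with the size of the feature pool.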
