Abstract

This paper presents a domain-based transfer learning method for deep learning-based object detection models that enables real-time computation on resource-constrained edge devices. Object detection is an essential task for intelligent platforms (e.g., drones, robots, and autonomous vehicles). However, edge devices cannot afford to run huge object detection models due to insufficient resources. Although a compressed deep learning model increases inference speed, its accuracy can deteriorate significantly. In this paper, we propose an object detection method that remains accurate while achieving real-time computation on edge devices. Our method aims to reduce marginal detection outputs of a model according to its application domain (e.g., city, park, or factory). We identify the objects crucial to a specific domain (e.g., pedestrian, car, and bench) and adopt transfer learning in which training is directed solely at the selected objects. This approach improves detection accuracy even for a compressed deep learning model such as the tiny versions of the YOLO (you only look once) framework. Our experiments validate that the method enables YOLOv7-tiny to provide detection accuracy comparable to the full YOLOv7 model despite having 83% fewer parameters than the original. Moreover, we confirm that our method achieves 389% faster inference than YOLOv7 on resource-constrained edge devices (i.e., NVIDIA Jetson boards).
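As a concrete illustration of the class-selection step described above, the following minimal sketch filters YOLO-format annotation files down to the classes deemed crucial for a domain and remaps their IDs for fine-tuning. The domain-to-class mapping, class IDs, and directory paths are illustrative assumptions, not details taken from the paper.

# A minimal sketch of domain-based class selection, assuming YOLO-format
# label files ("<class_id> <cx> <cy> <w> <h>" per line, one box per line).
from pathlib import Path

# Hypothetical mapping from application domain to crucial COCO class IDs
# (0 = person, 2 = car, 13 = bench, 16 = dog in the COCO ordering).
DOMAIN_CLASSES = {
    "city": [0, 2, 13],
    "park": [0, 13, 16],
}

def filter_labels(src_dir: str, dst_dir: str, domain: str) -> None:
    """Keep only annotations for the domain's crucial classes and
    remap their IDs to a contiguous range for fine-tuning."""
    keep = DOMAIN_CLASSES[domain]
    remap = {old: new for new, old in enumerate(keep)}
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for label_file in Path(src_dir).glob("*.txt"):
        kept_lines = []
        for line in label_file.read_text().splitlines():
            cls, *box = line.split()
            if int(cls) in remap:
                kept_lines.append(" ".join([str(remap[int(cls)])] + box))
        # Write the reduced label set; images without crucial objects
        # simply end up with empty label files.
        (out / label_file.name).write_text("\n".join(kept_lines))

if __name__ == "__main__":
    filter_labels("labels/train", "labels_city/train", "city")
    # The reduced dataset would then be used to fine-tune a compressed
    # model (e.g., YOLOv7-tiny) with its standard training script.

Under these assumptions, the reduced dataset concentrates the model's capacity on the domain's selected objects, which is the mechanism the paper credits for recovering accuracy in the compressed model.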
