Abstract
Edge-device-based object detection is crucial in many real-world applications, such as self-driving cars, advanced driver-assistance systems (ADAS), and driver behavior analysis. Although deep learning (DL) has become the de facto approach to object detection, the limited computing resources of embedded devices and the large model sizes of current DL-based methods make real-time object detection on edge devices difficult. To overcome these difficulties, this work proposes a novel YOLOv4-dense model that detects objects accurately and quickly; it is built on top of the YOLOv4 framework but with substantial improvements. Specifically, many CSP layers are pruned because they slow down inference, and a dense block is introduced to address the problem of missing small objects. In addition, a lightweight two-stream YOLO head is designed to further reduce the computational complexity of the model. Experimental results on the NVIDIA Jetson TX2 embedded platform demonstrate that YOLOv4-dense achieves higher accuracy and faster speed with a smaller model size. For instance, on the KITTI dataset, YOLOv4-dense obtains 84.3% mAP at 22.6 FPS with only 20.3 M parameters, surpassing state-of-the-art models with comparable parameter budgets, such as YOLOv3-tiny, YOLOv4-tiny, and PP-YOLO-tiny, by a large margin.
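To illustrate the dense-block idea mentioned above, the following is a minimal DenseNet-style sketch in PyTorch: each layer's output is concatenated with all earlier feature maps, which helps preserve fine-grained detail useful for small objects. The growth rate, layer count, and LeakyReLU activation are assumptions for illustration; the abstract does not specify the exact configuration used in YOLOv4-dense.

```python
# Hypothetical dense-block sketch; layer count, growth rate, and activation
# are illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int = 32, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(
                nn.Sequential(
                    nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1, bias=False),
                    nn.BatchNorm2d(growth_rate),
                    nn.LeakyReLU(0.1, inplace=True),
                )
            )
            channels += growth_rate  # each layer sees all previously produced feature maps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

# Example: a 128-channel feature map grows to 128 + 4 * 32 = 256 channels.
x = torch.randn(1, 128, 52, 52)
print(DenseBlock(128)(x).shape)  # torch.Size([1, 256, 52, 52])
```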