Abstract
With the rapid advancement of autonomous driving, faster and more accurate object detection frameworks have become a necessity. Many recent deep learning-based object detectors show compelling performance in detecting large objects across a variety of real-time driving applications. However, detecting small objects such as traffic signs and traffic lights remains challenging owing to their complex nature. Moreover, foreground/background imbalance and perspective distortion caused by adverse weather and low-lighting conditions make accurate small-object detection even more difficult. In this letter, we investigate how an existing object detector can be adapted to a specific task and how such modifications affect small-object detection. To this end, we introduce architectural changes to the popular YOLOv5 model that improve its performance on small objects without sacrificing detection accuracy on large objects, particularly in autonomous driving. We show that our modifications barely increase computational complexity while significantly improving detection accuracy and speed. Compared to the conventional YOLOv5, the proposed iS-YOLOv5 model increases the mean Average Precision (mAP) by 3.35% on the BDD100K dataset. Moreover, our proposed model improves the detection speed by 2.57 frames per second (FPS) over the YOLOv5 model.