Abstract

Dense vehicle detection during rush hours is important for intelligent transportation systems. Most existing object detection methods work well for off-peak vehicle detection in surveillance images. However, they may fail at dense vehicle detection during rush hours because of severe overlapping between vehicles. To address this problem, a dense vehicle detection network is proposed by embedding the deformable channel-wise column transformer (DCCT) into the you only look once (YOLO)-v5l network together with a novel asymmetric focal loss (AF loss). The proposed DCCT fully extracts the column-wise occlusion information of vehicles in the images and guides the network to pay more attention to the visible areas of partially occluded vehicles, improving the detection and localization accuracy of weak-feature targets. The proposed AF loss balances performance between easy and hard targets and addresses class imbalance. Extensive experiments demonstrate that the proposed network can accurately detect densely located on-road vehicles, including minority classes, in real time. Compared with the baseline YOLO-v5l, the mean average precision is improved by 3.93%, and the network achieves results comparable with existing state-of-the-art methods on the UA_Detrac dataset.
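
The abstract does not give the exact formulation of the AF loss, so the following is only a minimal sketch of a generic asymmetric focal loss in the spirit it describes: separate focusing strengths for positive and negative terms so that easy background samples are suppressed more aggressively, which is one common way to counter class imbalance. The function name, the hyperparameters gamma_pos, gamma_neg, and alpha, and the per-class sigmoid formulation are illustrative assumptions, not the paper's definition.

import torch

def asymmetric_focal_loss(pred_logits, targets, gamma_pos=1.0, gamma_neg=4.0, alpha=0.25):
    # Sketch of an asymmetric focal loss over per-class logits (assumed form,
    # not the paper's exact AF loss). targets are 0/1 per anchor and class.
    p = torch.sigmoid(pred_logits)
    # Positive term: down-weight easy positives with exponent gamma_pos.
    pos_term = -alpha * (1 - p) ** gamma_pos * torch.log(p.clamp(min=1e-8)) * targets
    # Negative term: suppress easy negatives more strongly with gamma_neg.
    neg_term = -(1 - alpha) * p ** gamma_neg * torch.log((1 - p).clamp(min=1e-8)) * (1 - targets)
    return (pos_term + neg_term).mean()

# Usage example: class scores for 4 anchors over 3 vehicle classes.
logits = torch.randn(4, 3)
targets = torch.tensor([[1., 0., 0.],
                        [0., 1., 0.],
                        [0., 0., 0.],
                        [1., 0., 0.]])
print(asymmetric_focal_loss(logits, targets).item())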
