Abstract

Visual-based vehicle detection has been extensively applied in autonomous driving systems and advanced driver assistance systems; however, it faces great challenges because partial observations regularly occur, owing to occlusion by infrastructure or dynamic objects, or to a limited field of view. This paper presents a two-stage detector based on Faster R-CNN for heavily occluded vehicle detection, in which we integrate a part-aware region proposal network to capture global and local visual knowledge across different vehicle attributes. This enables the model to simultaneously generate part-level proposals and instance-level proposals at the first stage. Then, the parts belonging to the same vehicle are encoded and reconfigured into a compositional whole-vehicle proposal via part affinity fields, allowing the model to generate integral candidates and mitigate the impact of occlusion to the greatest extent. Extensive experiments on the KITTI benchmark show that our method outperforms most machine-learning-based vehicle detection methods and achieves high recall in severely occluded application scenarios.
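Since the summary gives no implementation details, the snippet below is only a minimal, illustrative sketch of the general part-affinity-field idea the abstract refers to: a pair of part candidates is scored by integrating the PAF vector field along the segment connecting them, and high-scoring pairs can then be grouped into one whole-vehicle proposal. The function name, the (H, W, 2) field layout, and the sampling count are assumptions, not details from the paper.

```python
import numpy as np

def paf_pair_score(paf, pt_a, pt_b, num_samples=10):
    """Score how well a 2-channel part affinity field `paf` (H, W, 2)
    supports connecting part candidate `pt_a` to `pt_b`, both (x, y).

    The score is the mean dot product between the PAF vectors sampled
    along the segment a->b and the unit vector of that segment.
    """
    pt_a, pt_b = np.asarray(pt_a, float), np.asarray(pt_b, float)
    vec = pt_b - pt_a
    norm = np.linalg.norm(vec)
    if norm < 1e-6:
        return 0.0
    unit = vec / norm
    # Sample points evenly along the segment and read the field there.
    scores = []
    for t in np.linspace(0.0, 1.0, num_samples):
        x, y = np.round(pt_a + t * vec).astype(int)
        y = np.clip(y, 0, paf.shape[0] - 1)
        x = np.clip(x, 0, paf.shape[1] - 1)
        scores.append(float(paf[y, x] @ unit))
    return float(np.mean(scores))
```

Under this sketch, the highest-scoring part pairs would be greedily matched and their boxes merged into a single instance-level candidate.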

Highlights

  • Vehicle detection occupies a significant position in the computer vision field, with various applications such as Intelligent Transportation Systems (ITS), autonomous driving, and traffic safety; it is concerned with generating a series of bounding boxes enclosing the vehicle instances in an image

  • Impressive works on object detection [1]–[5] are driven by deep features automatically extracted by deep convolutional neural networks (CNNs), which can generate 2D boxes for a scene based on bounding-box regression techniques (a sketch of the standard parameterization follows this list)

  • In this paper, we develop a novel vehicle detection algorithm focused on occlusion and truncation handling, based on vehicle part-based proposal generation and a Part Affinity Fields (PAFs)-based combination algorithm
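For reference, the bounding-box regression mentioned above typically follows the standard anchor-relative parameterization used by Faster R-CNN-style detectors, sketched below. This is the common convention rather than a detail reported in this paper.

```python
import numpy as np

def encode_box(anchor, gt):
    """Standard R-CNN box-regression targets (tx, ty, tw, th) for an
    anchor (xa, ya, wa, ha) and a ground-truth box (x, y, w, h), both
    given as center coordinates plus width/height."""
    xa, ya, wa, ha = anchor
    x, y, w, h = gt
    return np.array([(x - xa) / wa,
                     (y - ya) / ha,
                     np.log(w / wa),
                     np.log(h / ha)])

def decode_box(anchor, t):
    """Invert encode_box: apply predicted offsets t to an anchor."""
    xa, ya, wa, ha = anchor
    tx, ty, tw, th = t
    return np.array([xa + tx * wa,
                     ya + ty * ha,
                     wa * np.exp(tw),
                     ha * np.exp(th)])
```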


Summary

INTRODUCTION

Vehicle detection occupies a significant position in the computer vision field, with various applications such as Intelligent Transportation Systems (ITS), autonomous driving, and traffic safety; it is concerned with generating a series of bounding boxes enclosing the vehicle instances in an image. Conventional approaches merely aim at narrowing the gap between the predicted bounding box and its designated ground truth [7], [8], rarely considering the occlusion that occurs among different vehicle semantic parts. These detectors are sensitive to the rigorous threshold of non-maximum suppression (NMS) in crowded traffic scenes, which are filled with inter-object occlusion that increases the difficulty of vehicle localization. Inspired by the bottom-up object detection strategy, we design a new region proposal network, termed part-aware RPN, to enable the detector to quickly capture unique characteristics of vehicles under various occlusions and viewpoints, and to narrow the gap between proposals and ground truth. We also propose a part-aware NMS, which performs NMS sequentially on the vehicle candidates and the corresponding part candidates in a cascaded manner, eliminating missed detections of different vehicles under high IoU.
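The part-aware NMS is only described at a high level here, so the snippet below is a hedged sketch of one way such a cascaded scheme could be organized: greedy IoU-based NMS over the instance-level vehicle boxes first, then NMS over the part boxes attached to each surviving vehicle. Function names, thresholds, and data layouts are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all as (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-9)

def greedy_nms(boxes, scores, thr):
    """Classic greedy NMS; returns indices of kept boxes."""
    order = np.argsort(-scores)
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= thr]
    return keep

def part_aware_nms(veh_boxes, veh_scores, part_boxes, part_scores,
                   veh_thr=0.5, part_thr=0.5):
    """Cascaded NMS sketch: suppress duplicate vehicle candidates first,
    then suppress duplicate part candidates of the surviving vehicles.
    `part_boxes[i]` / `part_scores[i]` hold the parts of vehicle i."""
    kept_veh = greedy_nms(veh_boxes, veh_scores, veh_thr)
    kept_parts = {}
    for i in kept_veh:
        if len(part_boxes[i]):
            kept_parts[i] = greedy_nms(np.asarray(part_boxes[i]),
                                       np.asarray(part_scores[i]), part_thr)
    return kept_veh, kept_parts
```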

RELATED WORKS
PART-AWARE NMS
EXPERIMENTS
ABLATION STUDY
Findings
CONCLUSION
