Abstract
Autonomous driving vehicles need to perceive their immediate environment in order to detect other traffic participants such as vehicles or pedestrians. Vision-based functionality using camera images has been widely investigated because of the low sensor price and the detailed information cameras provide. Conventional computer vision techniques are based on hand-engineered features. Due to the very complex environmental conditions, these limited feature representations fail to uniquely identify a specific object. Thanks to the rapid development of processing power (especially GPUs), advanced software frameworks and the availability of large image datasets, Convolutional Neural Networks (CNNs) have distinguished themselves by scoring best on popular object detection benchmarks in the research community. Using deep CNN architectures with many layers, they are able to extract both low-level and high-level features from images, skipping the feature design procedures of conventional computer vision approaches. In this work, an end-to-end learning pipeline for multi-object detection based on one existing CNN architecture, namely the Single Shot MultiBox Detector (SSD) [1], with real-time capability, is first reviewed. The SSD detector predicts the object's position based on feature maps of different resolutions together with a default set of bounding boxes. Using the SSD architecture as a starting point, this work focuses on training a single CNN to achieve high detection accuracy for vehicles and pedestrians computed in real time. Since vehicles and pedestrians have different sizes, shapes and poses, independent neural networks are normally trained to perform the two detection tasks. It is thus very challenging to train one network to learn this multi-scale detection ability. The contribution of this work can be summarized as follows:
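To illustrate the default-box mechanism the abstract refers to, the sketch below generates SSD-style default boxes over feature maps of different resolutions. The feature-map sizes, aspect ratios and scale bounds are illustrative assumptions, not the exact configuration trained in this work.

```python
# Minimal sketch of SSD-style default box generation (assumed parameters,
# not the configuration used in the paper).
import numpy as np

def default_boxes(feature_map_sizes, aspect_ratios, s_min=0.2, s_max=0.9):
    """Return default boxes as (cx, cy, w, h), normalized to [0, 1]."""
    m = len(feature_map_sizes)
    # Linearly spaced scales across the feature maps, plus one extra scale
    # for the intermediate square box sqrt(s_k * s_{k+1}).
    scales = [s_min + (s_max - s_min) * k / (m - 1) for k in range(m)]
    scales.append(1.0)
    boxes = []
    for k, fk in enumerate(feature_map_sizes):
        for i in range(fk):
            for j in range(fk):
                cx, cy = (j + 0.5) / fk, (i + 0.5) / fk  # cell center
                for ar in aspect_ratios[k]:
                    w = scales[k] * np.sqrt(ar)
                    h = scales[k] / np.sqrt(ar)
                    boxes.append((cx, cy, w, h))
                # Additional square box with intermediate scale.
                s_prime = np.sqrt(scales[k] * scales[k + 1])
                boxes.append((cx, cy, s_prime, s_prime))
    return np.array(boxes)

# Example: three detection grids, coarse to fine (hypothetical sizes).
boxes = default_boxes(feature_map_sizes=[38, 19, 10],
                      aspect_ratios=[[1.0, 2.0, 0.5]] * 3)
print(boxes.shape)  # (number_of_default_boxes, 4)
```

The detector then regresses offsets from each default box and predicts per-class confidences at every feature-map location, which is what allows a single network to cover objects of different scales such as vehicles and pedestrians.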