Abstract

In this study, a target classification method based on point cloud data from a high-resolution radar sensor is proposed. Using multiple antenna elements arranged in the horizontal and vertical directions, pedestrians, cyclists and vehicles can be represented as point cloud data in three-dimensional (3D) space. To classify targets using their spatial characteristics (i.e. length, height and width), the 3D point cloud data is orthogonally projected onto the xy, yz and zx planes, generating three images. A multi-view convolutional neural network (CNN)-based target classifier that takes these three images as inputs is then designed. To this end, a method for combining the detection results from the three viewpoints, either in series or in parallel, is proposed. The proposed classifier learns the spatial characteristics of the target from the detection results of multiple viewpoints. Compared with a CNN-based classifier that uses the detection result of only a single plane as input, the proposed method achieves 4.5 percentage points higher accuracy for the target class with the lowest classification accuracy. In addition, the proposed multi-view CNN structure achieves improved classification performance and shorter training time than well-known deep learning methods for image classification.
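As an illustration only, the sketch below shows one way the described pipeline could be realised: each projected view is rasterised into an occupancy image and passed through its own small CNN branch, and the branch features are fused in parallel by concatenation before classification. This is not the authors' implementation; the use of PyTorch, the grid resolution, the layer sizes and the concatenation-based fusion are all assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def project_to_plane(points, axes=(0, 1), grid=64, extent=(-5.0, 5.0)):
    """Orthogonally project an (N, 3) point cloud onto one plane
    (e.g. axes=(0, 1) for the xy plane) and rasterise it into a
    grid x grid occupancy image."""
    img, _, _ = np.histogram2d(points[:, axes[0]], points[:, axes[1]],
                               bins=grid, range=[extent, extent])
    return torch.from_numpy(img).float().unsqueeze(0)  # (1, grid, grid)

class PlaneBranch(nn.Module):
    """Small CNN that extracts a feature vector from one projected view."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class MultiViewClassifier(nn.Module):
    """Parallel fusion: features from the xy, yz and zx views are
    concatenated before the final layer (pedestrian / cyclist / vehicle)."""
    def __init__(self, num_classes=3, feat_dim=64):
        super().__init__()
        self.branch_xy = PlaneBranch(feat_dim)
        self.branch_yz = PlaneBranch(feat_dim)
        self.branch_zx = PlaneBranch(feat_dim)
        self.classifier = nn.Linear(3 * feat_dim, num_classes)

    def forward(self, img_xy, img_yz, img_zx):
        f = torch.cat([self.branch_xy(img_xy),
                       self.branch_yz(img_yz),
                       self.branch_zx(img_zx)], dim=1)
        return self.classifier(f)

# Example: project a dummy point cloud onto the three planes and classify it.
points = np.random.uniform(-2.0, 2.0, size=(200, 3)).astype(np.float32)
xy = project_to_plane(points, (0, 1)).unsqueeze(0)  # (1, 1, 64, 64)
yz = project_to_plane(points, (1, 2)).unsqueeze(0)
zx = project_to_plane(points, (2, 0)).unsqueeze(0)
logits = MultiViewClassifier()(xy, yz, zx)          # shape (1, 3)
```

The serial variant mentioned in the abstract would instead chain the per-view results rather than concatenating them; the parallel concatenation shown here is simply one plausible fusion choice.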
