Abstract

Recently, radar-based human activity recognition (HAR) has attracted the attention of researchers because deep learning (DL) models can automatically learn discriminative features from radar datasets. However, unlike general optical image data, collecting and labeling radar data to train a DL model requires considerable manpower and cost. An approach that can learn as many features as possible from a limited radar dataset is therefore essential. Moreover, even when DL models are trained on a dataset obtained from multiple geometries, their performance can degrade for geometries unseen during training. We therefore propose a novel radar-based HAR method that combines a range–time–Doppler (RTD) map with a range-distributed convolutional neural network (RD-CNN). Unlike the time–Doppler (TD) map commonly used for radar-based HAR, the proposed RTD map provides additional activity-related features by extending the TD map into three dimensions along the range axis. The proposed RD-CNN is a new DL model that performs HAR by using a range-distributed (RD) layer to extract Doppler features while excluding the range information of the RTD map. To verify the performance of the proposed model, experiments were conducted using the University of Glasgow “radar signatures of human activities” open dataset for radar-based HAR research. Compared with CNNs having the same number of parameters, the proposed model achieved higher recognition accuracy and lower recognition error, even for geometries not included in the training dataset.

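For illustration only, the following is a minimal sketch (not the authors' implementation) of how an RTD cube could be formed from a range–time matrix: a short-time Fourier transform is applied along slow time for each range bin, producing one time–Doppler map per range bin that is then stacked along the range axis. The function name, window parameters, and pulse-repetition-frequency argument are assumptions chosen for the example.

import numpy as np
from scipy.signal import stft

def range_time_doppler_cube(range_time, prf, nperseg=128, noverlap=96):
    # Illustrative sketch, not the paper's code.
    # range_time: complex array of shape (num_range_bins, num_slow_time_samples)
    # prf: pulse repetition frequency (slow-time sampling rate) in Hz
    maps = []
    for rt in range_time:
        # STFT along slow time yields a time-Doppler (micro-Doppler) map for one range bin
        _, _, Zxx = stft(rt, fs=prf, nperseg=nperseg, noverlap=noverlap,
                         return_onesided=False)
        # Centre zero Doppler and convert to log magnitude (dB)
        maps.append(20.0 * np.log10(np.abs(np.fft.fftshift(Zxx, axes=0)) + 1e-12))
    # Stack the per-range-bin maps into a (range, Doppler, time) cube
    return np.stack(maps, axis=0)

Similarly, one plausible reading of the range-distributed idea is that a shared 2-D convolutional extractor is applied to the time–Doppler map of every range bin, with the resulting features pooled across range so the classifier does not depend on the target's absolute range. The Keras layers, layer sizes, and pooling choices below are assumptions for a sketch, not the architecture reported in the paper.

from tensorflow.keras import layers, models

def build_rd_cnn(num_range_bins, num_doppler_bins, num_frames, num_classes):
    # Illustrative sketch of a range-distributed CNN, not the paper's model.
    # Each RTD cube is treated as a sequence of per-range-bin time-Doppler images.
    inputs = layers.Input(shape=(num_range_bins, num_doppler_bins, num_frames, 1))
    # Range-distributed stage: the same 2-D extractor is applied to every range bin
    x = layers.TimeDistributed(layers.Conv2D(16, 3, padding='same', activation='relu'))(inputs)
    x = layers.TimeDistributed(layers.MaxPooling2D())(x)
    x = layers.TimeDistributed(layers.Conv2D(32, 3, padding='same', activation='relu'))(x)
    x = layers.TimeDistributed(layers.GlobalAveragePooling2D())(x)
    # Averaging over range bins removes dependence on the target's absolute range
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(num_classes, activation='softmax')(x)
    return models.Model(inputs, outputs)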