Abstract

Human identification is vital in health monitoring, human-computer interaction, safety detection, and other fields. Compared with traditional vision-based methods, millimeter-wave radar sensors protect users' privacy and work in dark environments, giving them broad application prospects in IoT fields such as smart homes and smart medical care. Previous studies relied on manually collected labeled data, so data collection demanded substantial human resources and was unsuitable for large-scale deployment. We automatically collect multi-modal radar signals in users' daily lives without requiring researchers to label data manually. Based on the proposed data collection method, we established the first semi-supervised dataset for human identification, which includes synchronized radar point cloud data and range-velocity map data. The dataset covers four experiments, including ten monitored users and ten other users. We propose a semi-supervised co-training framework based on multi-modal data fusion for human identification. The framework uses the complementary characteristics of point cloud data and range-velocity map data to guide the models in learning from unlabeled data. In addition, we propose an information fusion method that fuses the radar data of the two modalities to further improve the model's performance. Experimental results show that the proposed method achieves 93.7% human identification accuracy, demonstrating the application and promotion potential of radar-based human identification technology.
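The co-training idea described above can be sketched in a minimal form: two classifiers, each trained on one modality, exchange confident pseudo-labels on unlabeled samples, and their predictions are fused at the end. The sketch below uses synthetic two-view features and a simple nearest-centroid classifier as hypothetical stand-ins for the paper's point-cloud and range-velocity models; all names and thresholds here are illustrative assumptions, not the authors' implementation.

```python
# Minimal co-training sketch over two synthetic "modalities" (stand-ins for
# radar point-cloud and range-velocity features; NOT the paper's actual models).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-view data: both views are informative about the same label.
n, n_labeled = 400, 40
y = rng.integers(0, 2, size=n)
view_a = y[:, None] + rng.normal(size=(n, 3))  # "point cloud" features
view_b = y[:, None] + rng.normal(size=(n, 3))  # "range-velocity" features

labeled = np.zeros(n, dtype=bool)
labeled[:n_labeled] = True          # small labeled seed set
pseudo = y.copy()                   # only the labeled prefix is trusted initially


def fit_centroids(X, t):
    """Per-class mean feature vector (a toy classifier)."""
    return np.stack([X[t == c].mean(axis=0) for c in (0, 1)])


def predict_proba(centroids, X):
    """Softmax over negative squared distances to each class centroid."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    p = np.exp(-d)
    return p / p.sum(axis=1, keepdims=True)


for _ in range(5):  # co-training rounds
    cent_a = fit_centroids(view_a[labeled], pseudo[labeled])
    cent_b = fit_centroids(view_b[labeled], pseudo[labeled])
    # Each model pseudo-labels unlabeled samples it is confident about,
    # growing the shared labeled pool used by the other model next round.
    for cent, view in ((cent_a, view_a), (cent_b, view_b)):
        unl = np.flatnonzero(~labeled)
        proba = predict_proba(cent, view[unl])
        confident = proba.max(axis=1) > 0.9   # assumed confidence threshold
        idx = unl[confident]
        pseudo[idx] = proba.argmax(axis=1)[confident]
        labeled[idx] = True

# Late fusion stand-in: average the two modalities' class probabilities.
fused = (predict_proba(cent_a, view_a) + predict_proba(cent_b, view_b)) / 2
acc = (fused.argmax(axis=1) == y).mean()
print(f"fused accuracy on synthetic data: {acc:.3f}")
```

The fusion step here is a simple probability average; the paper's information fusion method operates on the radar data itself, which this toy example does not attempt to reproduce.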
