Abstract

Low imaging spatial resolution hinders through-the-wall radar imaging (TWRI) from reconstructing complete human postures. This letter presents a convolutional neural network (CNN)-based human posture reconstruction method for TWRI. The training process follows a supervision-prediction learning pipeline inspired by cross-modal learning. Specifically, optical images and TWRI signals are collected simultaneously using a self-developed radar equipped with an optical camera. The optical images are then processed by a computer-vision-based supervision network to generate ground-truth human skeletons, and a prediction network learns to predict the same type of skeleton from the corresponding TWRI signals. After training, the model produces complete posture predictions in wall-occlusive scenarios using TWRI signals alone. Experiments show quantitative results comparable to state-of-the-art vision-based methods in non-wall-occlusive scenarios and accurate qualitative results under wall occlusion.
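A minimal PyTorch sketch of such a supervision-prediction training step is given below. The network architecture, tensor shapes, the 17-keypoint skeleton format, and the supervision_skeleton placeholder are illustrative assumptions, not the letter's actual implementation; they only show how an optical supervision signal can drive a radar-based prediction network.

```python
import torch
import torch.nn as nn

class RadarSkeletonPredictor(nn.Module):
    """Hypothetical prediction network: maps a TWRI image to 2-D skeleton keypoints."""
    def __init__(self, num_keypoints: int = 17):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_keypoints * 2)  # (x, y) per joint

    def forward(self, radar_image: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(radar_image).flatten(1)
        return self.head(feats).view(-1, self.num_keypoints, 2)

def supervision_skeleton(optical_image: torch.Tensor) -> torch.Tensor:
    """Stand-in for the vision-based supervision network (e.g., an off-the-shelf
    pose estimator) that produces ground-truth skeletons from optical frames."""
    batch = optical_image.shape[0]
    return torch.rand(batch, 17, 2)  # dummy keypoints for illustration only

model = RadarSkeletonPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# One synthetic training step: time-aligned optical frames supervise the
# radar-only prediction network.
radar_batch = torch.rand(4, 1, 64, 64)      # simulated TWRI images
optical_batch = torch.rand(4, 3, 256, 256)  # simulated camera frames

target = supervision_skeleton(optical_batch)  # cross-modal ground truth
pred = model(radar_batch)                     # skeleton predicted from radar alone
loss = criterion(pred, target)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```

At inference time only the radar branch is needed, which is what allows prediction behind a wall where the optical camera provides no signal.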
