Abstract

Through-wall human detection has vital and widely used applications in counter-terrorism, explosive disposal, and post-disaster relief. Recent research has established through-wall human-target recognition based on ultra-wideband (UWB) radar. With the recent development of deep learning, classification algorithms have demonstrated the ability to learn the important characteristics of a dataset from only a small number of samples. This paper studies the detection of the status of a human target behind a wall under small-sample conditions. Among deep learning network models, the autoencoder is chosen here to classify and identify human targets behind walls. By automatically learning the inherent characteristics of the data, the autoencoder can extract concise feature representations. Building on the basic autoencoder network, we add denoising and sparsity constraints to extract more effective feature representations, thereby improving the classification and identification rates. We classify and identify the states of human targets behind walls using single and multiple sensors under small-sample conditions, and then compare the results with those of other classification algorithms. The results show that multiple sensors are more effective than a single sensor and that the adopted autoencoder algorithm detects human targets behind walls more effectively than the other algorithms.
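To make the denoising-and-sparsity idea concrete, the following is a minimal sketch of an autoencoder with Gaussian input corruption and a KL-divergence sparsity penalty on the hidden activations. The framework (PyTorch), layer sizes, noise level, sparsity target, and penalty weight are illustrative assumptions, not the paper's reported configuration; the radar echo batch is a random stand-in for real through-wall measurements.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingSparseAutoencoder(nn.Module):
    """One-hidden-layer autoencoder: the input is corrupted during training
    (denoising) and the hidden code is regularized to be sparse."""

    def __init__(self, n_input, n_hidden, noise_std=0.1):
        super().__init__()
        self.encoder = nn.Linear(n_input, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_input)
        self.noise_std = noise_std

    def forward(self, x):
        # Denoising: add Gaussian noise to the input while training.
        x_noisy = x + self.noise_std * torch.randn_like(x) if self.training else x
        h = torch.sigmoid(self.encoder(x_noisy))   # hidden feature representation
        x_hat = self.decoder(h)                    # reconstruction of the clean input
        return x_hat, h

def sparsity_penalty(h, rho=0.05, eps=1e-8):
    """KL divergence between a target activation level rho and the mean
    activation of each hidden unit over the batch."""
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
    kl = rho * torch.log(rho / rho_hat) + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))
    return kl.sum()

# One training step on a hypothetical batch of normalized radar echo vectors.
model = DenoisingSparseAutoencoder(n_input=512, n_hidden=64)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 512)                            # stand-in data, values in [0, 1)
x_hat, h = model(x)
loss = F.mse_loss(x_hat, x) + 0.1 * sparsity_penalty(h)
loss.backward()
optimizer.step()
```

The compact hidden code produced by such an encoder would then be fed to a classifier of the target's state; the exact classifier and training schedule used in the paper are not specified in the abstract.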
