Abstract
Compressed sensing (CS) is a signal processing framework that reconstructs a signal from a small set of measurements obtained with random measurement matrices. Because of the strong randomness of these matrices, reconstruction performance is unstable. Additionally, current reconstruction algorithms are largely independent of the compressed sampling process and have high time complexity. To this end, a deep learning based stacked sparse denoising autoencoder compressed sensing (SSDAE_CS) model, consisting mainly of an encoder sub-network and a decoder sub-network, is proposed and analyzed in this paper. Instead of traditional random linear measurements, an encoder sub-network is trained to obtain multiple nonlinear measurements. Meanwhile, a trained decoder sub-network solves the CS recovery problem by learning the structural features of the training data. The two sub-networks are integrated into the SSDAE_CS model through end-to-end training, which strengthens the connection between the sampling and reconstruction processes, and their parameters are jointly trained to improve the overall performance of CS. Finally, experimental results demonstrate that the proposed method significantly outperforms state-of-the-art methods in terms of reconstruction performance, time cost, and denoising ability. Most importantly, the proposed model shows excellent reconstruction performance even with very few measurements.
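For illustration, a minimal sketch of such an encoder/decoder pair is given below; it is not the authors' implementation, and the layer sizes, activations, and signal/measurement dimensions are assumptions chosen for the example.

```python
# Minimal sketch (not the paper's code) of an SSDAE_CS-style model:
# an encoder sub-network produces learned nonlinear measurements and a
# decoder sub-network reconstructs the signal from them.
import torch
import torch.nn as nn

class SSDAE_CS(nn.Module):
    def __init__(self, signal_dim=256, measurement_dim=25):
        super().__init__()
        # Encoder sub-network: replaces the random linear measurement
        # matrix with multiple learned nonlinear measurements.
        self.encoder = nn.Sequential(
            nn.Linear(signal_dim, 128), nn.ReLU(),
            nn.Linear(128, measurement_dim), nn.ReLU(),
        )
        # Decoder sub-network: recovers the signal from the measurements
        # using structure learned from the training data.
        self.decoder = nn.Sequential(
            nn.Linear(measurement_dim, 128), nn.ReLU(),
            nn.Linear(128, signal_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.encoder(x)       # compressed sampling
        x_hat = self.decoder(y)   # CS reconstruction
        return x_hat
```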
Highlights
With the increasing demand for information processing, signal processing frameworks require ever-higher sampling rates and device processing speeds
This paper proposes a deep learning model, the stacked sparse denoising autoencoder compressed sensing (SSDAE_CS) model, which integrates the advantages of denoising and sparse autoencoders into CS theory
Through end-to-end training, the compressed sampling and signal reconstruction processes are integrated into the proposed model to improve the overall performance of CS, as sketched below
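A rough sketch of such end-to-end joint training, reusing the SSDAE_CS module from the earlier example, might look as follows; the denoising-style input corruption, noise level, and mean-squared-error loss are illustrative assumptions rather than the paper's exact training setup.

```python
# Sketch of one joint training step: gradients flow through both the
# encoder (sampling) and decoder (reconstruction) sub-networks at once.
import torch

def train_step(model, x_clean, optimizer, sigma=0.1):
    # Denoising-autoencoder-style corruption of the input (assumed setup).
    x_noisy = x_clean + sigma * torch.randn_like(x_clean)
    x_hat = model(x_noisy)                               # encode + decode jointly
    loss = torch.nn.functional.mse_loss(x_hat, x_clean)  # reconstruction loss
    optimizer.zero_grad()
    loss.backward()      # backpropagate through both sub-networks
    optimizer.step()     # update encoder and decoder parameters together
    return loss.item()
```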
Summary
With the increasing demand for information processing, signal processing frameworks require ever-higher sampling rates and device processing speeds. Gaussian [10] and Bernoulli [11] random matrices are used as the sampling matrices in most previous work because they satisfy the restricted isometry property [12] with high probability. However, such matrices suffer from high computational cost, large storage requirements, and uncertain reconstruction quality.
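For contrast with the learned measurements above, the following brief sketch shows the classical random sampling referred to here, in which a Gaussian measurement matrix is applied to a sparse signal; the dimensions and sparsity level are arbitrary assumptions for the example.

```python
# Classical CS sampling with a random Gaussian measurement matrix.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 10                                  # signal length, measurements, nonzeros
Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))   # Gaussian measurement matrix
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)  # k-sparse signal
y = Phi @ x                                            # m random linear measurements
```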