Abstract

The rapid evolution of neural networks and deep learning techniques over the last few years has opened powerful new routes for solving many complex problems. Moreover, because neural networks are structured and trained in a supervised manner, their parameters can be tuned automatically, yielding highly expressive models for the problem at hand. In this work, we investigated the use of recurrent neural networks (RNNs) to solve the sequential sparse recovery problem by unfolding the iterative soft thresholding algorithm (ISTA) into a stacked RNN. Specifically, we compared the performance of the unsupervised iterative algorithm with that of the supervised network on a purely compressive sampling reconstruction problem for time-frequency representations. Our results demonstrated that the trained stacked neural network outperforms the iterative algorithm in the quality of the reconstructed data and point to several future directions for improving performance.
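To illustrate the unrolling idea the abstract refers to, the following NumPy sketch contrasts classical ISTA with a LISTA-style forward pass in which each layer corresponds to one ISTA iteration and the per-layer matrices and thresholds would be learned by supervised training. This is not the authors' implementation; the dimensions, initialisation, and function names are hypothetical and shown only to make the unrolling explicit.

```python
import numpy as np

def soft_threshold(x, theta):
    """Element-wise soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(y, A, lam, n_iter=100):
    """Classical (unsupervised) ISTA for y ~ A x with x sparse."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

def unrolled_ista_forward(y, We, S, theta):
    """
    Forward pass of ISTA unrolled into a stack of recurrent layers (LISTA-style).
    Layer k applies x <- soft(We[k] @ y + S[k] @ x, theta[k]);
    in the supervised setting We, S and theta are learned from data.
    """
    x = np.zeros(S[0].shape[1])
    for k in range(len(S)):
        x = soft_threshold(We[k] @ y + S[k] @ x, theta[k])
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, n_layers = 32, 128, 8                 # hypothetical problem sizes
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
    y = A @ x_true                               # compressive measurements

    x_ista = ista(y, A, lam=0.05)

    # ISTA-derived initialisation of the unrolled network's weights
    L = np.linalg.norm(A, 2) ** 2
    We = [A.T / L] * n_layers
    S = [np.eye(n) - A.T @ A / L] * n_layers
    theta = [0.05 / L] * n_layers
    x_net = unrolled_ista_forward(y, We, S, theta)
```

With the ISTA-derived initialisation above, the unrolled forward pass reproduces a truncated run of the iterative algorithm; the supervised variant would then train `We`, `S`, and `theta` end to end so that far fewer layers reach comparable or better reconstruction quality.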
