This paper presents a parametric low-dimensional (LD) representation learning method that enables unsupervised reconstruction of high-dimensional (HD) input vectors. Under the assumption that the HD data and its LD representation share the same or similar local sparse structure, the proposed method achieves reconstructible dimensionality reduction (DR) by jointly learning dictionaries in both the original HD data space and the LD representation space. By regarding the sparse representation as a smooth function with respect to a specific dictionary, we construct an encoding–decoding block that learns LD representations from the sparse coefficients of the HD data. This learning process is expected to preserve the desirable structure of the HD data in the LD representation space while allowing reliable reconstruction from the LD space back to the original HD space. In addition, the proposed single-layer encoding–decoding block can be readily extended to deep learning structures. Numerical experiments on both synthetic data sets and real images show that the proposed method achieves strongly competitive and robust performance in DR, reconstruction, and synthesis, even on heavily corrupted data. The proposed method can serve as an alternative to compressive sensing (CS); moreover, it can outperform traditional CS methods in 1) task-driven learning problems, such as 2-D/3-D data visualization, and 2) data reconstruction at lower dimensions.
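To make the encode–decode idea concrete, the following is a minimal sketch of a coupled-dictionary pipeline under stated assumptions, not the paper's exact formulation: the HD dictionary D_hd and LD dictionary D_ld are random placeholders here (the paper learns them jointly), sparse coding is done with orthogonal matching pursuit, and all dimensions and names are illustrative.

```python
# Hedged sketch: encode an HD vector via sparse coding over D_hd, map the
# coefficients to an LD representation via D_ld, then decode back to HD.
# D_hd and D_ld are random placeholders; in the proposed method they would
# be learned jointly so the two spaces share the same local sparse structure.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
d, m, k, s = 64, 16, 128, 4          # HD dim, LD dim, number of atoms, sparsity

# Placeholder coupled dictionaries with unit-norm columns.
D_hd = rng.standard_normal((d, k)); D_hd /= np.linalg.norm(D_hd, axis=0)
D_ld = rng.standard_normal((m, k)); D_ld /= np.linalg.norm(D_ld, axis=0)

# A synthetic HD sample that is s-sparse over D_hd.
alpha_true = np.zeros(k)
support = rng.choice(k, s, replace=False)
alpha_true[support] = rng.standard_normal(s)
x = D_hd @ alpha_true

# Encoding: sparse-code x over D_hd, then map the coefficients to the LD space.
alpha = orthogonal_mp(D_hd, x, n_nonzero_coefs=s)
y = D_ld @ alpha                      # low-dimensional representation

# Decoding: sparse-code the LD vector over D_ld and reconstruct in the HD space.
alpha_dec = orthogonal_mp(D_ld, y, n_nonzero_coefs=s)
x_hat = D_hd @ alpha_dec

print("relative reconstruction error:",
      np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```

With random dictionaries the decoding step is not guaranteed to recover the correct support, so the reported error may be large; the point of joint dictionary learning in the paper is precisely to make the sparse codes in the two spaces agree so that this round trip is reliable.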