Abstract

Deep learning has been successfully applied to feature learning in speech recognition, image classification, and language processing. However, current deep learning models operate in the vector space, so they fail to learn effective features for big data: a vector cannot model the highly non-linear distribution of big data, especially heterogeneous data. This paper proposes a deep computation model for feature learning on big data, which uses a tensor to model the complex correlations among heterogeneous data. To fully capture the underlying data distribution, the proposed model uses the tensor distance as the average sum-of-squares error term of the reconstruction error in the output layer. To train the parameters of the proposed model, the paper designs a high-order back-propagation algorithm (HBP) by extending the conventional back-propagation algorithm from the vector space to the high-order tensor space. To evaluate the proposed model, we carried out experiments on four representative datasets, STL-10, CUAVE, SANE, and INEX, comparing against stacked auto-encoders and multimodal deep learning models. The results demonstrate that the proposed model performs feature learning efficiently on all four datasets.
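As a minimal illustrative sketch (not the authors' reference implementation), the code below shows one common way to compute a tensor-distance reconstruction error of the kind the abstract describes: the residual between an input tensor and its reconstruction is flattened and weighted by a metric matrix G that couples spatially nearby tensor elements. The Gaussian-kernel construction of G, the bandwidth parameter sigma, and the function names metric_matrix and tensor_distance are assumptions for illustration only.

```python
import numpy as np

def metric_matrix(shape, sigma=1.0):
    """Assumed Gaussian-kernel metric over tensor element coordinates."""
    # Coordinates of every tensor element, flattened in C order.
    coords = np.stack(
        np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1
    ).reshape(-1, len(shape))
    # Pairwise squared Euclidean distances between element positions.
    sq = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=-1)
    # Nearby elements are strongly coupled; distant ones are nearly independent.
    return np.exp(-sq / (2.0 * sigma ** 2))

def tensor_distance(x, x_hat, G):
    """Squared tensor distance d^T G d between a tensor and its reconstruction."""
    d = (x - x_hat).reshape(-1)  # flatten the residual tensor to a vector
    return float(d @ G @ d)      # quadratic form weighted by the metric matrix

# Hypothetical usage: a small 3rd-order tensor (e.g., an RGB image patch).
x = np.random.rand(4, 4, 3)
x_hat = x + 0.05 * np.random.randn(*x.shape)  # a noisy stand-in "reconstruction"
G = metric_matrix(x.shape, sigma=1.0)
print(tensor_distance(x, x_hat, G))
```

Note that with G set to the identity matrix this reduces to the ordinary sum-of-squares error; the metric matrix is what lets the loss reflect correlations among tensor elements rather than treating them as independent coordinates.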
