Abstract
An increasing number of emerging applications in data science and engineering are based on multidimensional and structurally rich data. The irregularities of high-dimensional data, however, often compromise the effectiveness of standard machine learning algorithms. We hereby propose the Rank-$R$ Feedforward Neural Network (FNN), a tensor-based nonlinear learning model that imposes the Canonical Polyadic (CP) decomposition on its parameters, thereby offering two core advantages compared to typical machine learning methods. First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension. Second, the number of the model's trainable parameters is substantially reduced, making it very efficient in small-sample settings. We establish the universal approximation and learnability properties of the Rank-$R$ FNN, and we validate its performance on real-world hyperspectral datasets. Experimental evaluations show that the Rank-$R$ FNN is a computationally inexpensive alternative to an ordinary FNN that achieves state-of-the-art performance on higher-order tensor data.
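The abstract's two core claims (no vectorization, fewer trainable parameters) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function name `rank_r_unit`, the dimensions, and the rank are illustrative assumptions. The weight matrix of a hidden unit is constrained to a sum of R rank-1 terms, so the unit contracts the input tensor mode by mode instead of flattening it.

```python
import numpy as np

def rank_r_unit(X, W1, W2, bias=0.0):
    """Response of one hidden unit whose weight matrix is constrained
    to a sum of R rank-1 terms (CP structure). Hypothetical sketch.

    X  : (I1, I2) input tensor (e.g. a spatial-by-spectral patch)
    W1 : (R, I1)  mode-1 factor vectors
    W2 : (R, I2)  mode-2 factor vectors
    """
    # sum_r  w1_r^T X w2_r : contracts X along each mode separately,
    # never vectorizing the input
    return sum(w1 @ X @ w2 for w1, w2 in zip(W1, W2)) + bias

# Parameter count per unit: R*(I1 + I2) instead of I1*I2 for a dense unit
I1, I2, R = 30, 100, 3
rng = np.random.default_rng(0)
X = rng.standard_normal((I1, I2))
W1 = rng.standard_normal((R, I1))
W2 = rng.standard_normal((R, I2))
y = np.tanh(rank_r_unit(X, W1, W2))  # nonlinear activation on top
print(R * (I1 + I2), "vs", I1 * I2)  # 390 vs 3000
```

The CP-constrained unit is mathematically equivalent to a dense unit whose weight matrix is the sum of the outer products of the factor vectors, which is what caps its expressiveness at rank R while cutting the parameter count from I1*I2 to R*(I1 + I2).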
Highlights
Large sets of high-order data have become ubiquitous across science and engineering disciplines, primarily due to recent advances in sensing technologies and increasingly affordable recording devices
We introduce a tensor-based non-linear learning model, called Rank-R Feedforward Neural Network (FNN)
The strength of Rank-1 FNN lies in the reduction of the number of trainable parameters compared to an ordinary Fully Connected Feedforward Neural Network (FCFNN)
Summary
Large sets of high-order data have become ubiquitous across science and engineering disciplines, primarily due to recent advances in sensing technologies and increasingly affordable recording devices.

Related work
Several supervised and unsupervised learning methods have been proposed for analyzing data in tensor format, including High-Order SVD, Tucker and CP decompositions [14], Multilinear PCA [15], probabilistic decompositions [16]–[18], and Common Mode Patterns [19]. Such methods, known as subspace learning, project raw data into lower-dimensional spaces and treat these projections as highly descriptive features of the raw information. All these advantages have been demonstrated by applying the proposed Rank-R FNN model to hyperspectral data classification on benchmark datasets.
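To make the subspace-learning idea concrete, here is a minimal sketch of a truncated HOSVD projection in NumPy. It is not any cited method's implementation: the helper names and the chosen ranks are illustrative assumptions. Each mode of the tensor is projected onto the leading left singular vectors of its matricization, yielding a small core tensor as the descriptive feature.

```python
import numpy as np

def mode_unfold(T, mode):
    """Mode-n matricization: unfold tensor T along `mode` into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_project(T, ranks):
    """Truncated HOSVD sketch: project T onto a low-dimensional subspace
    along every mode; returns the core tensor and the factor matrices."""
    factors = []
    for mode, r in enumerate(ranks):
        # Leading left singular vectors of the mode-n unfolding
        U, _, _ = np.linalg.svd(mode_unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        # Contract mode `mode` of the core with U^T
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode
        )
    return core, factors

T = np.random.default_rng(1).standard_normal((8, 9, 10))
core, Us = hosvd_project(T, (3, 3, 3))
print(core.shape)  # (3, 3, 3)
```

The core tensor plays the role of the low-dimensional projection that subspace-learning methods feed to downstream classifiers.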