Abstract

Deep neural networks (DNNs) generally take thousands of gradient-descent iterations to optimize and thus converge slowly. In addition, softmax, as a decision layer, may ignore the distribution information of the data during classification. To tackle these problems, we propose a novel manifold neural network based on non-gradient optimization, i.e., analytical-form solutions. Since the activation function is generally invertible, we reconstruct the network via forward ridge regression and low-rank backward approximation, which achieves rapid convergence. Moreover, by unifying the flexible Stiefel manifold and an adaptive support vector machine, we devise a novel decision layer that efficiently fits the manifold structure of the data and the label information. Consequently, a joint non-gradient optimization method is designed to generate the network with analytical-form results. Furthermore, an acceleration strategy is utilized to reduce the time complexity of handling high-dimensional datasets. Finally, extensive experiments validate the superior performance of the model.
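The abstract does not spell out the algorithmic details, but two of its ingredients have standard closed forms: the ridge-regression solve underlying the forward reconstruction, and projection onto the Stiefel manifold of orthonormal matrices. The sketch below is a minimal illustration of those generic building blocks, not the authors' actual method; the function names, the leaky-ReLU choice as an invertible activation, and the toy dimensions are all assumptions introduced here.

```python
import numpy as np

# Closed-form ridge regression for one layer's weights:
#   W* = argmin_W ||H W - T||_F^2 + lam ||W||_F^2
#      = (H^T H + lam I)^{-1} H^T T
# (illustrative helper, not the paper's exact update)
def ridge_solve(H, T, lam=1e-2):
    d = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ T)

# An invertible activation (leaky ReLU), so targets can be mapped
# back through a layer analytically, as the abstract assumes.
def act(x, a=0.1):
    return np.where(x > 0, x, a * x)

def act_inv(y, a=0.1):
    return np.where(y > 0, y, y / a)

# Projection (retraction) onto the Stiefel manifold {Q : Q^T Q = I}:
# the nearest orthonormal matrix to M is U V^T from its thin SVD.
def stiefel_project(M):
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))   # toy inputs
Y = rng.standard_normal((200, 4))    # toy regression targets

W1 = rng.standard_normal((16, 8))    # fixed/initialized first layer
assert np.allclose(act_inv(act(X @ W1)), X @ W1)  # invertibility check

# One analytical "training" step: output weights in closed form,
# with no gradient iterations.
H = act(X @ W1)
W2 = ridge_solve(H, Y)
print("residual:", np.linalg.norm(H @ W2 - Y))

# Orthonormality after projecting a random matrix onto the manifold.
Q = stiefel_project(rng.standard_normal((8, 4)))
print("Q^T Q - I:", np.linalg.norm(Q.T @ Q - np.eye(4)))
```

The single `ridge_solve` call replaces an iterative gradient loop for that layer, which is the source of the rapid convergence the abstract claims; how the paper chains such solves across layers and couples them with the SVM-based decision layer is specified only in the full text.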