Abstract

In this paper, we propose fast sparse deep neural networks that offer an alternative way of learning in a deep structure. We examine optimization algorithms for traditional deep neural networks and find that their training is time-consuming because of the large number of connection parameters between layers. To reduce this cost, fast sparse deep neural networks are designed around two main aspects. First, the parameters of each hidden layer are learned through closed-form solutions, unlike the iterative updating strategy of the back-propagation (BP) algorithm. Second, fast sparse deep neural networks estimate the output target by summing the linear approximations produced at each layer, which differs from most deep neural network models. Unlike traditional deep neural networks, fast sparse deep neural networks achieve excellent generalization performance without fine-tuning. They also effectively overcome the shortcomings of the extreme learning machine and the hierarchical extreme learning machine. Extensive experimental results on benchmark datasets demonstrate that, compared with existing deep neural networks, the proposed model and optimization algorithms are feasible and efficient.
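As a rough illustration of these two design aspects, the minimal sketch below pairs an ELM-style closed-form ridge solution for each hidden layer with a residual-summation readout, so that every layer contributes a linear approximation of the remaining target. The layer width, regularization strength, sigmoid activation, function names, and omission of any sparsity mechanism are illustrative assumptions, not details taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def fit_fsdnn(X, T, n_layers=3, n_hidden=100, reg=1e-3):
        """Solve each layer in closed form; each layer fits the residual
        target left unexplained by the layers before it."""
        layers = []
        residual = T.copy()
        H_in = X
        for _ in range(n_layers):
            # Random input weights (ELM-style), fixed after initialization.
            W = rng.standard_normal((H_in.shape[1], n_hidden))
            b = rng.standard_normal(n_hidden)
            H = 1.0 / (1.0 + np.exp(-(H_in @ W + b)))  # sigmoid hidden activations
            # Closed-form ridge regression for this layer's output weights.
            beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ residual)
            residual = residual - H @ beta             # pass unexplained target on
            layers.append((W, b, beta))
            H_in = H                                   # feed activations to next layer
        return layers

    def predict_fsdnn(X, layers):
        """Sum the linear approximations contributed by every layer."""
        out = 0.0
        H_in = X
        for W, b, beta in layers:
            H = 1.0 / (1.0 + np.exp(-(H_in @ W + b)))
            out = out + H @ beta
            H_in = H
        return out

    # Toy usage: regress a nonlinear target from random features.
    X = rng.standard_normal((200, 5))
    T = np.sin(X[:, :1]) + 0.1 * rng.standard_normal((200, 1))
    model = fit_fsdnn(X, T)
    print(np.mean((predict_fsdnn(X, model) - T) ** 2))

Because each layer's weights are obtained from a single linear solve rather than repeated gradient updates, training in this style avoids the iterative fine-tuning phase of BP-based networks, which is the source of the speed advantage the abstract describes.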
