Abstract
The ridge polynomial neural network (RPNN) is one of the most popular higher-order neural networks: it can approximate reasonable functions well while avoiding the combinatorial growth in the number of weights that plagues other higher-order architectures. In this paper, we study the convergence of the gradient method with a batch updating rule for training the ridge polynomial neural network. We prove a monotonicity theorem and two convergence theorems, one for weak convergence and one for strong convergence. Experimental results confirm the validity of the proposed theorems.
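To make the setting concrete, the following is a minimal sketch of an order-N ridge polynomial network and one batch-gradient update. It assumes the standard RPNN form, a sum of pi units of increasing degree passed through a sigmoid; the weight shapes, learning rate, and the finite-difference gradient (used here in place of the analytic gradient a real implementation would derive) are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rpnn_forward(x, W, b):
    """Order-N RPNN: sigma(sum of pi units of degree 1..N).
    W is a list where W[i-1] has shape (i, d); b[i-1] has shape (i,)."""
    total = 0.0
    for Wi, bi in zip(W, b):
        total += np.prod(Wi @ x + bi)  # pi unit: product of ridge terms
    return sigmoid(total)

def mse(W, b, X, Y):
    preds = np.array([rpnn_forward(x, W, b) for x in X])
    return np.mean((preds - Y) ** 2)

def batch_gradient_step(W, b, X, Y, lr=0.05, eps=1e-6):
    """One batch update: the gradient is estimated over the WHOLE training
    set (batch updating rule) and all weights are updated simultaneously.
    Forward differences stand in for the analytic gradient."""
    base = mse(W, b, X, Y)
    newW = [Wi.copy() for Wi in W]
    newb = [bi.copy() for bi in b]
    for P, nP in zip(W + b, newW + newb):
        for idx in np.ndindex(P.shape):
            old = P[idx]
            P[idx] = old + eps
            grad = (mse(W, b, X, Y) - base) / eps
            P[idx] = old
            nP[idx] -= lr * grad
    return newW, newb

rng = np.random.default_rng(0)
d, order, n = 2, 3, 8
W = [0.1 * rng.standard_normal((i, d)) for i in range(1, order + 1)]
b = [0.1 * rng.standard_normal(i) for i in range(1, order + 1)]
X = rng.standard_normal((n, d))
Y = sigmoid(X @ np.array([1.0, -1.0]))  # synthetic smooth target

before = mse(W, b, X, Y)
W, b = batch_gradient_step(W, b, X, Y)
after = mse(W, b, X, Y)
```

The monotonicity result in the paper corresponds to the observation that, under suitable conditions on the learning rate, `after <= before` at every batch step; the toy run above exhibits that behavior on one step.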