Abstract

This paper discusses the computation of higher order derivatives for universal learning networks, which form a superset of all kinds of neural networks. Two computing algorithms, backward propagation and forward propagation, are proposed. A technique called "local description" allows the proposed algorithms to be expressed very simply. Numerical simulations demonstrate the usefulness of higher order derivatives in neural network training.
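
The abstract does not reproduce the paper's algorithms themselves. Purely as a rough illustration of the two modes it names, the sketch below uses JAX's reverse-mode transform (jax.grad, analogous to backward propagation) composed with its forward-mode transform (jax.jacfwd) to obtain first and second order derivatives of a toy scalar network; the function net, the weights w, and the input x are hypothetical stand-ins, not the paper's universal learning network formulation.

    import jax
    import jax.numpy as jnp

    # Toy scalar "network": one tanh node with three weights.
    # (Hypothetical stand-in; not the paper's universal learning network.)
    def net(w, x):
        h = jnp.tanh(w[0] * x + w[1])
        return w[2] * h

    x = 1.5
    w = jnp.array([0.3, -0.2, 0.8])
    f = lambda w: net(w, x)

    grad_f = jax.grad(f)         # reverse mode: first order derivatives dE/dw
    hess_f = jax.jacfwd(grad_f)  # forward over reverse: second order derivatives

    print(grad_f(w))  # gradient vector, shape (3,)
    print(hess_f(w))  # Hessian matrix, shape (3, 3)

Composing a forward-mode pass over a reverse-mode pass is a common way to reach second order derivatives at modest cost, which is in the same spirit as pairing the backward and forward propagation algorithms the abstract describes.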
