Abstract

This paper presents a novel approach, called scale equalization (SE), to implementing higher-order neural networks. SE is particularly useful for eliminating the scale divergence problem commonly encountered in higher-order networks. In general, the larger the scale divergence, the more training steps are required to complete the training process. The effectiveness of SE is illustrated with an exemplar higher-order network built on the Sigma-Pi network (SESPN), applied to function approximation. SESPN requires the same computation time per epoch as SPN, but it takes far fewer epochs to complete the training process. Empirical results are provided to verify that SESPN outperforms other higher-order neural networks in terms of computational efficiency.

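Since the abstract does not give the paper's exact scale-equalization rule, the sketch below is only a hypothetical illustration of the scale divergence problem in a Sigma-Pi expansion and one plausible way to equalize term scales. The function names, the assumed input-scale parameter s, and the per-order normalization are illustrative assumptions, not the authors' formulation.

```python
# Illustrative sketch only: the per-order normalization (dividing each k-th
# order product term by s**k, with s an assumed typical input scale) is a
# hypothetical example of scale equalization, not the paper's method.
import numpy as np
from itertools import combinations

def sigma_pi_terms(x, order):
    """All product terms of x up to the given order (a basic Sigma-Pi expansion)."""
    terms = []
    for k in range(1, order + 1):
        for idx in combinations(range(len(x)), k):
            terms.append(np.prod(x[list(idx)]))
    return np.array(terms)

def scale_equalized_terms(x, order, s):
    """Hypothetical scale equalization: divide each k-th order product term by
    s**k so that terms of different orders have comparable magnitudes."""
    terms = []
    for k in range(1, order + 1):
        for idx in combinations(range(len(x)), k):
            terms.append(np.prod(x[list(idx)]) / (s ** k))
    return np.array(terms)

x = np.array([4.0, 5.0, 6.0])                   # inputs with scale roughly s ~ 5
raw = sigma_pi_terms(x, order=3)                # term magnitudes spread from ~4 to 120
eq = scale_equalized_terms(x, order=3, s=5.0)   # all terms brought near O(1)
print(raw.min(), raw.max())                     # wide spread -> scale divergence
print(eq.min(), eq.max())                       # much narrower spread
```

When the higher-order product terms span very different magnitudes, the corresponding weight gradients also differ in scale, which is why a larger scale divergence tends to require more training steps; equalizing the term scales removes this imbalance without changing the per-epoch cost of the expansion.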