Abstract
In this paper, firstly, a class of deep (multi-layer) neural networks with polynomial activation functions, termed polynomial activation neural networks (PANNs), is introduced, and their feed-forward and recurrent architectures together with the corresponding nonlinear difference-equation models are explicated. The relationship between PANNs and conventional deep neural networks with sigmoid activation functions is discussed briefly by means of Taylor series. Secondly, the numerical stability and stabilization of PANNs are examined: stability conditions are derived using bounded-state trajectory inequalities and small-state linear approximation under a small-parametrization assumption, and the implications of this stability analysis are consistent with what is already known about neural network pre-training. Thirdly, based on what we term the coverage back-propagation parametrization, pre-training algorithms for PANNs are constructed with and without activation-function optimization; in particular, activation-function optimization is a new concept of this study that provides greater learning flexibility in general neural networks. Finally, nonlinear function fitting is illustrated numerically as an application of PANNs, revealing the high generalization capability of linear parameter-varying neural algorithms.
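As a brief illustration of the Taylor-series connection between polynomial and sigmoid activations mentioned above, the logistic sigmoid admits the Maclaurin expansion

\[
\sigma(x) \;=\; \frac{1}{1+e^{-x}} \;=\; \tfrac{1}{2} + \tfrac{x}{4} - \tfrac{x^{3}}{48} + \tfrac{x^{5}}{480} - \cdots,
\]

so a low-order polynomial activation, e.g. the cubic truncation \(p(x) = \tfrac{1}{2} + \tfrac{x}{4} - \tfrac{x^{3}}{48}\), can be read as a small-state approximation of a sigmoid unit. This is only an illustrative sketch; the expansion form and truncation order used in the paper itself may differ.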