Abstract
In this paper, two different types of neural networks are investigated and employed for the online solution of strictly convex quadratic minimization: a two-layer back-propagation neural network (BPNN) and a discrete-time Hopfield-type neural network (HNN). As simplified models, their error functions can be defined directly as the quadratic objective function, from which we derive the weight-updating formula of the BPNN and the state-transition equation of the HNN. We show that the two derived learning expressions are mathematically identical, even though the two networks differ substantially in architecture, physical meaning, and training pattern. Computer simulations further substantiate the efficacy of both the BPNN and HNN models on convex quadratic minimization and, more importantly, their common nature of learning.
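The common learning expression hinted at above can be illustrated with a minimal sketch. The paper's exact formulas are not reproduced here; this assumes the shared update reduces to gradient descent on a strictly convex quadratic f(x) = ½xᵀAx + bᵀx, with the matrix A, vector b, and step size eta chosen purely for illustration:

```python
import numpy as np

# Illustrative sketch (not the paper's exact derivation): both the BPNN
# weight-updating formula and the HNN state-transition equation are claimed
# to reduce to the same iteration, here modeled as gradient descent on
# f(x) = 1/2 x^T A x + b^T x with A symmetric positive definite.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # assumed positive-definite Hessian
b = np.array([1.0, 2.0])     # assumed linear coefficient
eta = 0.1                    # assumed learning rate / step size

x = np.zeros(2)
for _ in range(200):
    # shared update rule: x <- x - eta * grad f(x), grad f(x) = A x + b
    x = x - eta * (A @ x + b)

# compare against the closed-form minimizer x* = -A^{-1} b
x_star = np.linalg.solve(A, -b)
print(np.allclose(x, x_star, atol=1e-6))
```

Because A is positive definite and eta is below 2 divided by the largest eigenvalue of A, the iteration contracts toward the unique minimizer, which is the behavior the simulations in the paper are reported to confirm for both network models.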