Abstract
This paper discusses the possibilities for parallel processing of the Full- and Limited-Memory BFGS training algorithms, two powerful second-order optimisation techniques used to train Multilayer Perceptrons. The step-size and gradient calculations are identified as the critical components in both. The matrix calculations in the Full-Memory algorithm are also shown to be significant for larger problems. Various strategies for parallelisation are considered, the best of which is implemented on PVM- and transputer-based architectures. The generation of a neural predictive model for a nonlinear chemical plant is used as a control case study to assess parallel performance in terms of achievable speed-up. The transputer implementation is found to give excellent speed-ups, but the size of problem that can be trained is limited by memory constraints. Speed-ups achievable with the PVM implementation, on the other hand, are much poorer because of inefficient communication, although memory does not pose a problem.
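To make the cost structure referred to above concrete, the following is a minimal sketch of one Full-Memory BFGS iteration, not the paper's implementation. The function and parameter names (`backtracking_line_search`, `f`, `grad`, the Armijo constants) are illustrative assumptions; the comments mark the three cost centres the abstract identifies: the gradient calculation, the step-size (line search) calculation, and the O(n²) matrix update that grows significant for larger networks.

```python
import numpy as np

def backtracking_line_search(f, x, d, g, alpha=1.0, beta=0.5, c=1e-4):
    # Simple Armijo backtracking (illustrative): each trial requires a full
    # error-function evaluation, which sums over all training patterns.
    while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
        alpha *= beta
    return alpha

def bfgs_step(f, grad, x, H):
    """One Full-Memory BFGS iteration on weight vector x (hypothetical sketch)."""
    g = grad(x)                                        # (1) gradient calculation
    d = -H @ g                                         # search direction from stored inverse Hessian
    alpha = backtracking_line_search(f, x, d, g)       # (2) step-size calculation
    s = alpha * d
    x_new = x + s
    y = grad(x_new) - g
    rho = 1.0 / float(y @ s)
    I = np.eye(x.size)
    # (3) O(n^2) inverse-Hessian update, the matrix cost noted for larger problems;
    # the Limited-Memory variant avoids storing H by using a two-loop recursion instead.
    H_new = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
    return x_new, H_new
```

Because (1) and (2) both reduce to sums over training patterns, they are natural targets for data-parallel distribution across PVM tasks or transputer nodes, which is the kind of strategy the paper evaluates.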