Abstract
The paper addresses a state-of-the-art problem associated with neural network training. A training algorithm implementing the annealing method, together with a special parallelization procedure, is proposed. The training efficiency is demonstrated on a neural network architecture designed for parallel data processing. For the color image compression problem, it is shown that the proposed algorithm significantly outperforms gradient methods in terms of efficiency. The results obtained make it possible to improve the quality of neural network training in general and can be used to solve a wide class of applied problems.
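The abstract does not specify the details of the proposed algorithm or its parallelization procedure. The following is a minimal illustrative sketch, assuming a standard simulated-annealing loop over the weights of a small autoencoder-style network used for image compression; all function and parameter names here are hypothetical and are not taken from the paper.

```python
import numpy as np

def loss(weights, X, Y):
    """Mean squared reconstruction error of a one-hidden-layer autoencoder
    (an illustrative stand-in for the paper's compression-network objective)."""
    W1, W2 = weights
    H = np.tanh(X @ W1)   # encode
    X_hat = H @ W2        # decode
    return np.mean((X_hat - Y) ** 2)

def anneal(X, Y, n_in, n_hidden, T0=1.0, T_min=1e-3, alpha=0.95,
           steps_per_T=100, seed=None):
    """Simulated annealing over the weight matrices of a small network."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))
    best = (W1.copy(), W2.copy())
    best_loss = cur_loss = loss((W1, W2), X, Y)
    T = T0
    while T > T_min:
        for _ in range(steps_per_T):
            # propose a random perturbation of the current weights
            cand = (W1 + rng.normal(scale=0.1 * T, size=W1.shape),
                    W2 + rng.normal(scale=0.1 * T, size=W2.shape))
            cand_loss = loss(cand, X, Y)
            # accept improvements always; accept worse moves with Boltzmann probability
            if cand_loss < cur_loss or rng.random() < np.exp((cur_loss - cand_loss) / T):
                (W1, W2), cur_loss = cand, cand_loss
                if cur_loss < best_loss:
                    best, best_loss = (W1.copy(), W2.copy()), cur_loss
        T *= alpha  # geometric cooling schedule
    return best, best_loss

# Toy usage: compress random 8x8 "image patches" (64 values) to 16 hidden units.
X = np.random.default_rng(0).random((256, 64))
weights, err = anneal(X, X, n_in=64, n_hidden=16, seed=1)
print(f"final reconstruction MSE: {err:.4f}")
```

One common way to parallelize such a procedure is to run several independent annealing chains (for example, at different temperatures or with different random seeds) concurrently and keep the best solution found; the paper's actual parallelization scheme may differ.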