Abstract
Background. There are many neural network architectures, each with its own advantages and disadvantages: simple, fast, and easy-to-use single-layer perceptrons are suitable for linear and linearized regression tasks, while more complex neural networks are expensive in terms of training and prediction time. This raises the problem of developing fast and efficient algorithms for training artificial neural networks. An additional motivation for researching new training methods is achieving the smallest training and prediction errors.
Objective. The aim of the paper is to find and analyze the properties of the most effective method of training artificial neural networks using a combined approximation of the response surface, and to perform computational experiments on the proposed artificial neural networks, comparing the results with known and developed methods.
Methods. Known methods of combined approximation of the response surface were analyzed. New algorithms for training neural networks, based on clustering the data with the k-means method, were developed. The algorithm with the smallest neural-network training and prediction errors was selected.
Results. The results of studying different methods of training artificial neural networks are given. The peculiarities of the methods of combined approximation of the response surface are analyzed. It is shown that the two methods of combined approximation of the response surface for training artificial neural networks and for prediction confirm the effectiveness of the proposed approach. The combined approximation algorithm that provides the lowest training and forecasting errors is selected.
Conclusions. The developed methods of combined approximation of the response surface make it possible to train neural networks and predict data with smaller errors than an autoregressive moving-average model, a multilayer perceptron, or artificial neural networks based on the model of geometric transformations without additional data processing.
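The Methods paragraph only names the ingredients (k-means clustering of the data followed by training of simple approximators), so the Python snippet below is a minimal sketch of one possible reading of that idea under stated assumptions, not the authors' algorithm: it partitions the inputs with k-means and fits an independent linear model per cluster, so that the overall response surface is approximated by a combination of local models. The class name ClusteredLinearApproximator, the use of scikit-learn's KMeans and LinearRegression as stand-ins for simple perceptron-like approximators, and the number of clusters are all illustrative assumptions.

# Hypothetical sketch: cluster the training data with k-means, then fit a
# simple linear model on each cluster; predictions are routed to the model
# of the nearest centroid. This is only one possible interpretation of the
# "clustering + combined approximation" idea described in the abstract.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression


class ClusteredLinearApproximator:
    def __init__(self, n_clusters=3, random_state=0):
        self.kmeans = KMeans(n_clusters=n_clusters, n_init=10,
                             random_state=random_state)
        self.models = {}

    def fit(self, X, y):
        # Partition the input space with k-means.
        labels = self.kmeans.fit_predict(X)
        # Fit one local linear approximation per cluster.
        for c in np.unique(labels):
            mask = labels == c
            self.models[c] = LinearRegression().fit(X[mask], y[mask])
        return self

    def predict(self, X):
        # Route each sample to the model of its nearest centroid.
        labels = self.kmeans.predict(X)
        y_pred = np.empty(len(X))
        for c, model in self.models.items():
            mask = labels == c
            if mask.any():
                y_pred[mask] = model.predict(X[mask])
        return y_pred


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(500)  # nonlinear surface
    model = ClusteredLinearApproximator(n_clusters=5).fit(X, y)
    print("mean abs error:", np.abs(model.predict(X) - y).mean())

The piecewise-linear design mirrors the trade-off the abstract describes: each local model stays as cheap to train and evaluate as a single-layer perceptron, while the combination of clusters can follow a nonlinear response surface.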
Highlights
There are many neural network architectures, each with its own advantages and disadvantages
more complex neural networks are expensive in terms of training and prediction time
the problem arises of developing fast and efficient algorithms for training artificial neural networks
Summary
There are many neural network architectures, each with its own advantages and disadvantages: simple, fast, and easy-to-use single-layer perceptrons are suitable for linear and linearized regression tasks, while more complex neural networks are expensive in terms of training and prediction time. This raises the problem of developing fast and efficient algorithms for training artificial neural networks. An additional motivation for researching new training methods is achieving the smallest training and prediction errors.