Abstract
In this study, an algorithm for growing neural networks is proposed. Starting with an empty network, the algorithm reduces the error of prediction by successively inserting connections and neurons. The type of network element to insert and its location are determined by the maximum reduction of the error of prediction. The algorithm builds non-uniform neural networks without any constraints on size or complexity. The algorithm is additionally implemented in two frameworks that make very efficient use of a data set of limited size, resulting in more reproducible variable selection and network topology. The algorithm is applied to a data set of binary mixtures of the refrigerants R22 and R134a, which were measured by a surface plasmon resonance (SPR) device in a time-resolved mode. Compared with common static neural networks, all implementations of the growing neural networks show better generalization abilities, resulting in low relative errors of prediction of 0.75% for R22 and 1.18% for R134a on unknown data.
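To illustrate the greedy growing idea summarized above, the following is a minimal, self-contained sketch: starting from an empty network, the candidate element whose insertion reduces the validation error the most is accepted, and growth stops when no candidate improves the error appreciably. All names, the candidate scheme (random hidden-neuron weights with a least-squares output layer), and the stopping tolerance are assumptions made for illustration; the paper's algorithm also inserts individual connections and builds non-uniform topologies, which this sketch does not attempt.

```python
# Hedged sketch of greedy network growing: repeatedly add the candidate
# hidden neuron that most reduces the validation error. Illustration only,
# not the authors' algorithm.
import numpy as np

rng = np.random.default_rng(0)

def hidden_out(X, W):
    """Tanh activations of the current hidden neurons (empty network -> bias only)."""
    H = np.tanh(X @ W) if W.shape[1] else np.empty((len(X), 0))
    return np.hstack([np.ones((len(X), 1)), H])   # prepend bias column

def fit_output(H, y):
    """Least-squares output weights for the current hidden layer."""
    return np.linalg.lstsq(H, y, rcond=None)[0]

def rmse(H, w, y):
    return float(np.sqrt(np.mean((H @ w - y) ** 2)))

def grow(X_tr, y_tr, X_val, y_val, max_neurons=20, n_candidates=30, tol=1e-4):
    W = np.empty((X_tr.shape[1], 0))              # start with an empty network
    best = rmse(hidden_out(X_val, W),
                fit_output(hidden_out(X_tr, W), y_tr), y_val)
    for _ in range(max_neurons):
        cands = rng.normal(size=(X_tr.shape[1], n_candidates))
        scored = []
        for j in range(n_candidates):             # score every candidate insertion
            Wc = np.hstack([W, cands[:, [j]]])
            w = fit_output(hidden_out(X_tr, Wc), y_tr)
            scored.append((rmse(hidden_out(X_val, Wc), w, y_val), Wc))
        err, Wc = min(scored, key=lambda t: t[0])
        if best - err < tol:                      # stop when improvement stalls
            break
        W, best = Wc, err                         # accept the best insertion
    return W, best

# Toy usage on synthetic data (not the SPR refrigerant data set).
X = rng.uniform(-2, 2, size=(400, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
W, err = grow(X[:300], y[:300], X[300:], y[300:])
print(f"{W.shape[1]} hidden neurons, validation RMSE {err:.4f}")
```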