Abstract
A novel neural network has been devised that combines the advantages of cascade correlation and computational temperature constraints. The combination yields a nonlinear calibration method that is easier to use, more stable, and faster to train than back-propagation networks. Cascade correlation networks adjust only a single unit at a time, so they train very rapidly compared to back-propagation networks, and they determine their own topology during training. In addition, hidden units are not readjusted once they have been trained, so these networks are capable of incremental learning and of caching the outputs of trained units. With the cascade architecture, the temperature may be optimized for each hidden unit. Computational temperature is a parameter that controls the fuzziness of a hidden unit's output. During training, the magnitude of the change, with respect to temperature, of the covariance between a candidate unit's output and the network's residual error is maximized. This criterion avoids local minima, forces the hidden units to model the larger variances in the data, and generates hidden units that furnish fuzzy logic. As a result, models built with temperature-constrained cascade correlation networks are better at interpolating among, and generalizing from, the design points. These properties are demonstrated for exemplary linear interpolations, a nonlinear interpolation, and chemical data sets for which the number of chlorine atoms in polychlorinated biphenyl molecules is predicted from mass spectra.
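For readers who want a concrete picture of the criterion sketched above, the following minimal Python/NumPy illustration is offered under stated assumptions: it is not the paper's implementation, and all names (`hidden_activation`, `covariance_score`, `net_input`, `residuals`) are hypothetical. It shows a sigmoid transfer function scaled by a computational temperature T, the cascade-correlation covariance score S between a candidate unit's output and the residual errors, and a finite-difference estimate of dS/dT whose magnitude the temperature-constrained method maximizes.

```python
import numpy as np

# Illustrative sketch only; names and the finite-difference scheme are
# assumptions, not taken from the paper.

def hidden_activation(net_input, T):
    """Sigmoid transfer function with computational temperature T.

    Small T gives a crisp, step-like response; large T gives a fuzzy,
    nearly linear response over the same input range.
    """
    return 1.0 / (1.0 + np.exp(-net_input / T))

def covariance_score(unit_output, residuals):
    """Cascade-correlation criterion S: summed magnitude of the covariance
    between a candidate unit's output (one value per training pattern)
    and the residual errors (patterns x output units)."""
    v = unit_output - unit_output.mean()
    e = residuals - residuals.mean(axis=0)
    return np.abs(v @ e).sum()

def dS_dT(net_input, residuals, T, h=1e-4):
    """Central-difference estimate of dS/dT; the temperature-constrained
    approach selects the T at which |dS/dT| is maximal."""
    s_hi = covariance_score(hidden_activation(net_input, T + h), residuals)
    s_lo = covariance_score(hidden_activation(net_input, T - h), residuals)
    return (s_hi - s_lo) / (2.0 * h)

# Toy usage: scan a grid of temperatures for one candidate unit.
rng = np.random.default_rng(0)
net_input = rng.normal(size=100)        # candidate unit's net inputs
residuals = rng.normal(size=(100, 3))   # current network errors, 3 outputs
temps = np.logspace(-2, 1, 50)
best_T = max(temps, key=lambda T: abs(dS_dT(net_input, residuals, T)))
```

Because each hidden unit is trained and then frozen, this temperature scan can be run independently per unit, which is what makes per-unit temperature optimization practical in the cascade architecture.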