Abstract
The back-propagation training algorithm, used to train non-linear feed-forward multi-layer artificial neural networks, computes an estimate of the error present in the data shown to a network. Although this estimate plays no role during training itself, it can be used after training to adjust the input data so that it better fits the internal model of the trained network. Once this has been done, the difference between the adjusted and original data can be informative. This paper discusses how such data adjustment may be performed, demonstrates the results for two simple data sets, and suggests some uses for the resulting differences.
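The core idea — propagating the output error back past the first weight layer and using it to update the *inputs* rather than the weights — can be sketched as follows. This is an illustrative reconstruction, not the paper's own code: the two-layer sigmoid network, the learning rate, and the step count are all assumptions made for the example.

```python
import numpy as np

def adjust_input(x, y, W1, b1, W2, b2, lr=0.1, steps=50):
    """Gradient-descend on the INPUT of a fixed two-layer sigmoid network
    so its output moves toward target y; the weights stay frozen."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    x = x.copy()
    for _ in range(steps):
        # Forward pass through the frozen network.
        h = sigmoid(W1 @ x + b1)
        out = sigmoid(W2 @ h + b2)
        # Backward pass: standard back-propagation deltas, but carried
        # one layer further, down to the input vector itself.
        delta_out = (out - y) * out * (1 - out)
        delta_h = (W2.T @ delta_out) * h * (1 - h)
        grad_x = W1.T @ delta_h
        # Adjust the data, not the weights.
        x -= lr * grad_x
    return x
```

After the loop, `x - x_original` is the "difference between the modified and original data" the abstract refers to; with a trained network it indicates how each input component would have to change to match the network's internal model.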