Abstract

Neural network architecture optimization is often a critical issue, particularly when VLSI implementation is considered. This paper proposes a new minimization method for multilayered feedforward ANNs and an original approach to their synthesis, both based on analyzing the quantity of information (entropy) flowing through the network. Each layer is described as an information filter that retains the characteristics relevant to the task until the classification is complete. A basic incremental synthesis method, including the supervised training procedure, is derived to design application-tailored neural paradigms with good generalization capability.
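The core idea, viewing each layer as an information filter, can be illustrated by measuring the Shannon entropy of the pattern distribution at every layer's output. The sketch below uses a toy two-layer network with hard-threshold units; all names, weights, and the binarization scheme are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch (illustrative, not the paper's algorithm): estimate the
# information carried by each layer by counting the distinct output
# patterns a set of input samples produces and computing the Shannon
# entropy of that empirical distribution.
from collections import Counter
import math
import random

def layer_output(weights, biases, x):
    """One feedforward layer with hard-threshold (binary) units."""
    return tuple(
        1 if sum(w * xi for w, xi in zip(row, x)) + b > 0 else 0
        for row, b in zip(weights, biases)
    )

def entropy(patterns):
    """Shannon entropy (bits) of the empirical pattern distribution."""
    counts = Counter(patterns)
    n = len(patterns)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(0)
# Toy network: 4 binary inputs -> 3 hidden units -> 1 output unit.
W1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
b1 = [0.0] * 3
W2 = [[random.uniform(-1, 1) for _ in range(3)]]
b2 = [0.0]

samples = [tuple(random.choice((0, 1)) for _ in range(4)) for _ in range(64)]
hidden = [layer_output(W1, b1, x) for x in samples]
output = [layer_output(W2, b2, h) for h in hidden]

print(entropy(samples), entropy(hidden), entropy(output))
```

Because each layer is a deterministic map, the measured entropy cannot increase from one layer to the next: the network progressively filters the input information down to what the final classification requires, which is the quantity the minimization method exploits.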

