Abstract

The subject of neural networks is broad and deep, covering disciplines ranging from medicine to microelectronics. A neural network tool (NNT) is an analysis tool modeled after the massively parallel biological structure of the brain; this chapter discusses that derivation. An NNT simulates a highly interconnected, parallel computational structure built from many relatively simple processing elements called neurodes. Neurodes are gathered into groups called slabs. A slab may receive input (an input slab), provide output (an output slab), or be inaccessible to both input and output, connecting only to other slabs (an internal slab). NNTs are characterized in three ways. The first is the architecture of the NNT: the particular way in which the slabs are interconnected and receive input and output. The second is the transfer function of the slabs, that is, the function that describes the output of a neurode given its input. The third is the learning paradigm used to train the network. These three characteristics can be thought of as the top-level attributes of an NNT.
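The neurode/slab/transfer-function vocabulary above can be sketched in a few lines of Python. This is an illustrative sketch only: the class names, weights, and the choice of a sigmoid transfer function are assumptions for the example, not details taken from the chapter.

```python
import math

def sigmoid(x):
    # One common transfer function: squashes any input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

class Neurode:
    """A single simple processing element: a weighted sum of its
    inputs passed through a transfer function."""
    def __init__(self, weights, transfer=sigmoid):
        self.weights = weights
        self.transfer = transfer

    def output(self, inputs):
        net = sum(w * x for w, x in zip(self.weights, inputs))
        return self.transfer(net)

class Slab:
    """A group of neurodes; every neurode in the slab sees the
    same input vector."""
    def __init__(self, neurodes):
        self.neurodes = neurodes

    def output(self, inputs):
        return [n.output(inputs) for n in self.neurodes]

# The architecture is how slabs connect: here an input vector feeds
# an internal slab, whose outputs feed an output slab.
input_vec = [1.0, 0.5]
internal = Slab([Neurode([0.4, -0.6]), Neurode([0.9, 0.1])])
output_slab = Slab([Neurode([0.7, -0.2])])
result = output_slab.output(internal.output(input_vec))
```

The third characteristic, the learning paradigm, would be a rule for adjusting each neurode's `weights` from training data; it is omitted here to keep the sketch minimal.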
