Abstract

Neural networks for modeling large networked systems, such as smart electric power grids, have multiple inputs and outputs. Each additional output variable increases the number of parameters to be adapted and the dimensionality of the problem, making learning more difficult. Cellular computational networks (CCNs) are a class of sparsely connected dynamic recurrent networks (DRNs). By properly selecting a set of input elements for each output variable in a given application, a DRN can be transformed into a CCN, which significantly reduces the complexity of the neural network and permits simple training methods for independent learning in each cell, making the approach scalable. This article demonstrates the development of a CCN through dimensionality reduction in a DRN for scalability and improved performance. The concept is explained analytically and verified empirically through application.
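The dimensionality-reduction idea can be illustrated with a back-of-the-envelope parameter count. The sketch below is illustrative only and not the paper's implementation: it assumes a single-layer recurrent structure, and the function names, network sizes, and input subsets are hypothetical. It compares a monolithic DRN, where every hidden unit sees all inputs and all outputs see all hidden units, against a CCN in which each output variable gets its own small cell over a hand-selected input subset, so cells can be trained independently.

```python
# Illustrative sketch (assumed single-layer recurrent structure; all
# names and sizes are hypothetical, not from the paper).

def dense_param_count(n_inputs, n_outputs, n_hidden):
    """Weights in one dense recurrent layer: each hidden unit connects
    to all inputs and all hidden states; each output to all hidden units."""
    return n_hidden * (n_inputs + n_hidden) + n_outputs * n_hidden

def ccn_param_count(input_subsets, n_hidden_per_cell):
    """Each cell is a small recurrent network over its own input subset
    with a single output; cells share no parameters."""
    return sum(
        n_hidden_per_cell * (len(subset) + n_hidden_per_cell) + n_hidden_per_cell
        for subset in input_subsets
    )

# Hypothetical system: 50 inputs, 10 outputs, and each output assumed
# to depend on only 5 neighbouring inputs (the per-cell input selection).
n_inputs, n_outputs, n_hidden = 50, 10, 20
subsets = [list(range(5 * i, 5 * i + 5)) for i in range(n_outputs)]

dense = dense_param_count(n_inputs, n_outputs, n_hidden)  # monolithic DRN
cellular = ccn_param_count(subsets, n_hidden_per_cell=5)  # CCN of 10 cells

print(dense, cellular)  # 1600 550
```

Under these assumed sizes the CCN needs roughly a third of the parameters, and because the cells are decoupled, each can be trained with a simple method on its own low-dimensional problem, which is the scalability argument the abstract makes.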
