Abstract
The functional link neural network (FLNN) increases the input dimension by functionally expanding the input features. In this paper, modifications to the FLNN are proposed for undertaking data classification tasks. The main objective is to optimize the FLNN by formulating a parsimonious network with less complexity and a lower computational burden than the original FLNN. The methodology consists of selecting a number of important expanded features to build the FLNN structure. It is based on the rationale that not all the expanded features are equally important in distinguishing different target classes. As such, we modify the FLNN so that less relevant and redundant expanded input features are identified and discarded. In addition, instead of using the back-propagation learning algorithm, adjustment of the network weights is formulated as an optimization task. Specifically, the genetic algorithm is used for both feature selection and weight tuning in the FLNN. An experimental study using benchmark problems is conducted to evaluate the efficacy of the modified FLNN. The empirical results indicate that even though the structure of the modified FLNN is simpler, it achieves classification results comparable to those of the original FLNN with fully expanded input features.
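To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of how a functional expansion and a feature-selection mask could work together. It assumes a trigonometric expansion, which is one common FLNN choice; the mask plays the role of the chromosome segment a genetic algorithm would evolve to decide which expanded features to keep, alongside the weights.

```python
import numpy as np

def functional_expansion(x):
    # Trigonometric functional expansion (an assumed, common FLNN variant):
    # each raw input x_i is mapped to [x_i, sin(pi * x_i), cos(pi * x_i)],
    # tripling the input dimension.
    return np.concatenate([x, np.sin(np.pi * x), np.cos(np.pi * x)])

def flnn_output(x, mask, weights, bias=0.0):
    # `mask` mimics the GA-selected subset of expanded features: entries
    # marked 0 are discarded, yielding a smaller, parsimonious network.
    # `weights` has one entry per *kept* feature and would also be tuned
    # by the GA instead of back-propagation.
    z = functional_expansion(x)[mask.astype(bool)]
    return np.tanh(z @ weights + bias)

x = np.array([0.2, -0.5])             # two raw input features
mask = np.array([1, 0, 1, 1, 0, 1])   # keep 4 of the 6 expanded features
weights = np.array([0.5, -0.3, 0.8, 0.1])
print(flnn_output(x, mask, weights))
```

A GA fitness function would then score each (mask, weights) chromosome by the classification accuracy of the resulting pruned network, so that both structure and weights are optimized jointly.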