Abstract
Multilayer perceptron (MLP) networks trained using backpropagation can be slow to converge in many instances. The primary reason for slow learning is the global nature of backpropagation. A second reason is that a neuron in an MLP network functions as a hyperplane separator and is therefore inefficient when applied to classification problems with nonlinear decision boundaries. This paper presents a data-representational approach that addresses these problems while operating within the framework of the familiar backpropagation model. We examine the use of receptors with overlapping receptive fields as a preprocessing technique for encoding inputs to MLP networks. The proposed data representation scheme, termed ensemble encoding, is shown to promote local learning and to provide enhanced nonlinear separability. Simulation results for well-known problems in classification and time-series prediction indicate that ensemble encoding can significantly reduce the time required to train MLP networks. Since the choice of representation for input data is independent of the learning algorithm and of the functional form employed in the MLP model, nonlinear preprocessing of network inputs may be an attractive alternative for many MLP network applications.
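The abstract does not specify the receptor profile or spacing. As a minimal sketch only, assuming evenly spaced Gaussian receptive fields over a normalized input range (the function ensemble_encode and its parameters are hypothetical names, not from the paper), a single input variable can be expanded into a vector of overlapping receptor activations before being presented to the MLP:

```python
import numpy as np

def ensemble_encode(x, n_receptors=10, lo=0.0, hi=1.0, width=None):
    """Encode a scalar input as activations of overlapping receptive fields.

    Receptor centres are spaced evenly over [lo, hi]; `width` controls how
    much neighbouring fields overlap. A Gaussian profile is assumed here;
    the paper's abstract does not state the exact receptor shape.
    """
    centers = np.linspace(lo, hi, n_receptors)
    if width is None:
        # Default: width equal to the spacing between centres, so that
        # adjacent receptive fields overlap substantially.
        width = (hi - lo) / (n_receptors - 1)
    return np.exp(-0.5 * ((x - centers) / width) ** 2)

# A scalar becomes a locally active vector: only receptors whose fields
# cover x respond strongly.
print(np.round(ensemble_encode(0.37), 3))
```

Because only the receptors whose fields cover a given input respond strongly, backpropagation updates concentrate on the weights attached to those units, which is consistent with the local-learning effect the abstract describes.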