Abstract

Stochastic configuration networks (SCNs), as a class of randomized learner models, have been successfully employed in data analytics due to their universal approximation capability and fast modeling property. Their technical essence lies in stochastically configuring the hidden nodes (or basis functions) under a supervisory mechanism, rather than with the data-independent randomization usually adopted for building randomized neural networks. For image data modeling tasks, flattening images into the 1-D inputs required by SCNs discards their spatial structure and may result in undesirable performance. This paper extends the original SCNs to a 2-D version, called 2DSCNs, for fast building of randomized learners with matrix inputs. Theoretical analysis of the advantages of 2DSCNs over SCNs, covering the complexity of the random parameter space and the superiority in generalization, is presented. Empirical results on one regression example, four benchmark handwritten-digit classification tasks, two human face recognition datasets, and one natural image database demonstrate that the proposed 2DSCNs perform favorably and show good potential for image data analytics.
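To make the two ideas in the abstract concrete (the supervisory mechanism and the 2-D hidden node with matrix inputs), the following is a minimal NumPy sketch of an incremental 2DSCN construction loop. It assumes the sigmoid activation, a node of the form sigma(u^T X v + b) with a pair of random projection vectors in place of a flattened weight vector, and an inequality test in the spirit of the original SCN supervisory mechanism; the function name, default parameters, and sampling ranges are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def build_2dscn(X, T, L_max=50, n_cand=100, r=0.99, tol=1e-2, rng=None):
    """Grow a 2DSCN-style learner one hidden node at a time (sketch).

    X: (N, p, q) matrix inputs; T: (N, m) targets. Each hidden node uses
    random projection vectors (u, v) on both sides of the input matrix,
    so the node output is sigma(u^T X_i v + b); a random candidate is
    kept only if it passes a supervisory inequality on the residual,
    which is what separates SCNs from data-independent randomization.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    N, p, q = X.shape
    m = T.shape[1]
    e = T.copy()                        # residual error, e_0 = T
    H = np.empty((N, 0))                # hidden-layer output matrix
    beta = np.zeros((0, m))             # output weights
    for L in range(1, L_max + 1):
        if np.linalg.norm(e) < tol:
            break
        mu = (1.0 - r) / (L + 1)        # relaxing sequence, mu_L -> 0
        best_h, best_score = None, -np.inf
        for _ in range(n_cand):
            u = rng.uniform(-1.0, 1.0, size=p)   # left random weights
            v = rng.uniform(-1.0, 1.0, size=q)   # right random weights
            b = rng.uniform(-1.0, 1.0)           # random bias
            h = sigmoid(np.einsum('p,npq,q->n', u, X, v) + b)
            # supervisory mechanism: each per-output score must be >= 0
            xi = (e.T @ h) ** 2 / (h @ h) \
                 - (1.0 - r - mu) * np.sum(e ** 2, axis=0)
            if np.all(xi >= 0) and xi.sum() > best_score:
                best_h, best_score = h, xi.sum()
        if best_h is None:              # no admissible candidate: stop
            break
        H = np.column_stack([H, best_h])
        beta = np.linalg.lstsq(H, T, rcond=None)[0]  # global least squares
        e = T - H @ beta
    return H, beta
```

Note the design point the abstract alludes to: a 1-D SCN node on a p x q image would draw p*q random input weights, whereas the 2-D node above draws only p + q, shrinking the random parameter space while keeping the row/column structure of the input intact.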