Abstract
In this paper, we propose a new method for decomposing pattern classification problems based on the class relations among training data. By using this method, we can divide a K-class classification problem into a series of K(K−1)/2 two-class problems. These two-class problems are to discriminate class Ci from class Cj for i = 1, ..., K and j = i+1, ..., K, while the existence of the training data belonging to the other K−2 classes is ignored. If the two-class problem of discriminating class Ci from class Cj is still hard to learn, we can further break it down into a set of two-class subproblems as small as we expect. Since each of the two-class problems can be treated as a completely separate classification problem within the proposed learning framework, all of the two-class problems can be learned in parallel. We also propose two module combination principles that give practical guidelines for integrating individual trained network modules. After each of the two-class problems has been learned by a network module, we can easily integrate all of the trained modules into a min-max modular (M3) network according to the module combination principles and obtain a solution to the original problem. Consequently, a large-scale and complex K-class classification problem can be solved effortlessly and efficiently by learning a series of smaller and simpler two-class problems in parallel.
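The following is a minimal sketch of the pairwise decomposition and min-max combination outlined above. It is illustrative only: the choice of base module (scikit-learn's LogisticRegression), the function names, and the use of predicted probabilities as module outputs are assumptions made for this example and are not prescribed by the paper.

```python
# Sketch of pairwise task decomposition and M3-style min-max combination.
# The base learner (LogisticRegression) and all function names are
# illustrative assumptions, not the paper's specified implementation.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

def train_pairwise_modules(X, y, classes):
    """Train one two-class module M_ij for every pair (Ci, Cj), i < j,
    ignoring training data from the other K-2 classes."""
    modules = {}
    for ci, cj in combinations(classes, 2):
        mask = np.isin(y, [ci, cj])
        Xp = X[mask]
        yp = (y[mask] == ci).astype(int)          # 1 for Ci, 0 for Cj
        modules[(ci, cj)] = LogisticRegression().fit(Xp, yp)
    return modules

def min_max_predict(modules, X, classes):
    """Combine modules: for each class Ci, take the MIN over all modules
    that discriminate Ci from some other class; the final label is the
    class with the MAXimum combined score."""
    scores = np.empty((len(X), len(classes)))
    for k, ci in enumerate(classes):
        per_pair = []
        for (a, b), m in modules.items():
            if a == ci:
                per_pair.append(m.predict_proba(X)[:, 1])   # evidence for Ci vs Cb
            elif b == ci:
                per_pair.append(m.predict_proba(X)[:, 0])   # evidence for Ci vs Ca
        scores[:, k] = np.min(per_pair, axis=0)              # MIN unit
    return np.asarray(classes)[np.argmax(scores, axis=1)]    # MAX unit
```

Because each call to the two-class training routine uses only the data of the two classes involved, the K(K−1)/2 modules are independent and could be trained in parallel, which is the point of the decomposition.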