Abstract

The self-training algorithm accelerates the training of a supervised classifier by combining a small set of labeled samples with a large set of unlabeled samples. Despite its considerable success, self-training suffers from mislabeled samples. Local noise filters are designed to detect such samples, but two major problems limit this kind of application: (a) current local noise filters do not treat the spatial distribution of the nearest neighbors across different classes in much detail, and (b) they perform poorly when mislabeled samples lie in the overlapping regions of different classes. Here, we develop an integrated architecture, a self-training algorithm based on density peaks combined with a globally adaptive multi-local noise filter (STDP-GAMLNF), to improve detection efficiency. First, density peak clustering reveals the spatial structure of the data set, which is then used to guide self-training in labeling unlabeled samples. After each labeling epoch, GAMLNF judges comprehensively, across multiple classes, whether a sample is mislabeled, effectively reducing the influence of edge samples. Experimental results on eighteen UCI data sets demonstrate that GAMLNF is not sensitive to the value of the neighbor parameter k and can adaptively find the appropriate number of neighbors for each class.
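The abstract describes the pipeline only at a high level. As a rough illustration of the generic self-training-with-local-noise-filtering loop it outlines, the sketch below substitutes a plain majority-vote kNN filter for GAMLNF and a fixed confidence threshold for density-peak-guided labeling; all names (`knn_noise_filter`, `self_train`, `conf`, `k`) and the scikit-learn-style interface are illustrative assumptions, not the paper's actual method or API.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_noise_filter(X, y, k=5):
    """Flag samples whose label disagrees with the majority label of
    their k nearest neighbors. This is a generic local noise filter;
    the paper's GAMLNF additionally adapts the neighbor count per class
    and weighs the spatial distribution of neighbors across classes,
    which is not reproduced here. Assumes integer class labels."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    # Ask for k+1 neighbors, then drop column 0: each training point
    # is its own nearest neighbor at distance zero.
    idx = knn.kneighbors(X, n_neighbors=k + 1, return_distance=False)[:, 1:]
    majority = np.array([np.bincount(y[i]).argmax() for i in idx])
    return majority == y  # True = keep, False = suspected mislabel

def self_train(clf, X_lab, y_lab, X_unlab, epochs=10, conf=0.9, k=5):
    """Self-training loop: repeatedly pseudo-label high-confidence
    unlabeled samples, then filter the enlarged labeled set for noise
    after each epoch. Inputs are assumed to be NumPy arrays."""
    for _ in range(epochs):
        if len(X_unlab) == 0:
            break
        clf.fit(X_lab, y_lab)
        proba = clf.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= conf
        if not confident.any():
            break
        # Move high-confidence samples into the labeled pool, mapping
        # probability columns back to class labels via clf.classes_.
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate(
            [y_lab, clf.classes_[proba[confident].argmax(axis=1)]])
        X_unlab = X_unlab[~confident]
        # Per-epoch noise filtering, as the abstract describes for GAMLNF.
        keep = knn_noise_filter(X_lab, y_lab, k=k)
        X_lab, y_lab = X_lab[keep], y_lab[keep]
    return clf.fit(X_lab, y_lab)
```

For example, `self_train(KNeighborsClassifier(), X_lab, y_lab, X_unlab)` runs the loop with a kNN base classifier; any scikit-learn estimator exposing `predict_proba` fits this sketch.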
