Abstract

Prior knowledge plays an important role in improving the performance of support vector machines (SVMs). Traditional SVMs do not consider any prior knowledge about the training set. In this paper, neighbors’ distribution knowledge is incorporated into SVMs. For each sample, the neighbors’ distribution is measured by the sum, over its neighbors, of the cosine of the angle between two vectors: the difference between the sample and the neighbor, and the difference between the sample and the mean of its neighbors. This knowledge reflects a sample’s importance in the training process and can be interpreted as either a relative margin or an instance weight. In this paper, neighbors’ distribution knowledge is regarded as the relative margin and incorporated into the framework of density-induced margin support vector machines, whose relative margin is otherwise measured by a relative density degree. Experiments on both synthetic datasets and real-world benchmark datasets demonstrate that SVMs perform better after incorporating the neighbors’ distribution. The experimental results also show that the neighbors’ distribution is more suitable than the relative density degree for representing the relative margin.
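The cosine-sum measure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `neighbors_distribution`, the brute-force Euclidean k-nearest-neighbor search, and the choice of `k` are all assumptions made for the example. Intuitively, a sample on the boundary of its class has all neighbors on one side, so the cosines align and the sum is large; an interior sample's neighbor directions cancel and the sum is small.

```python
import numpy as np

def neighbors_distribution(X, k=5):
    """Hypothetical sketch of the neighbors' distribution measure.

    For each sample x, sum the cosine of the angle between
    (x - neighbor_j) and (x - mean of x's k nearest neighbors).
    """
    n = X.shape[0]
    scores = np.zeros(n)
    for i in range(n):
        # Brute-force k-nearest neighbors by Euclidean distance,
        # excluding the sample itself.
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf
        nbr_idx = np.argsort(dists)[:k]
        neighbors = X[nbr_idx]

        # Vector from the neighbors' mean to the sample.
        mean_dir = X[i] - neighbors.mean(axis=0)

        total = 0.0
        for nb in neighbors:
            d = X[i] - nb  # vector from this neighbor to the sample
            denom = np.linalg.norm(d) * np.linalg.norm(mean_dir)
            if denom > 0:  # guard against zero-length vectors
                total += (d @ mean_dir) / denom
        scores[i] = total
    return scores
```

For points evenly spaced on a line, an endpoint has both neighbors on the same side (score near `k`), while a midpoint's neighbor mean coincides with the point itself (score near 0), matching the boundary-versus-interior intuition.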
