Abstract

In recent years, kernel density estimation has been exploited by computer scientists to model several important problems in machine learning, bioinformatics, and computer vision. However, when the dimension of the data set is high, conventional kernel density estimators suffer from poor convergence rates of the pointwise mean square error (MSE) and the integrated mean square error (IMSE). Designing a kernel density estimator that overcomes this problem has therefore been a long-standing challenge. This paper proposes a relaxed model of variable kernel density estimation and analyzes its performance in data classification applications. It is proved in this paper that, in terms of pointwise MSE, the convergence rate of the relaxed variable kernel density estimator can approach O(n^{-1}) regardless of the dimension of the data set, where n is the number of sampling instances. Experiments with data classification applications have shown that the improved convergence rate of the pointwise MSE leads to higher prediction accuracy. In fact, the experimental results have also shown that the data classifier constructed from the relaxed variable kernel density estimator delivers the same level of prediction accuracy as an SVM with the Gaussian kernel.
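
To make the setting concrete, the sketch below shows a generic variable-bandwidth kernel density classifier: each class density is estimated with per-sample bandwidths and a test point is assigned to the class with the largest prior-weighted density. The abstract does not spell out the paper's "relaxed" model, so the k-nearest-neighbour bandwidth rule, the Gaussian kernel, and the function names here are illustrative assumptions, not the authors' exact estimator.

```python
# Minimal sketch of a variable-bandwidth KDE classifier (assumptions noted above).
import numpy as np

def variable_kde(train, query, k=10):
    """Density estimate at each query point, with each training point's
    bandwidth set to its distance to its k-th nearest training neighbour
    (an assumed adaptive-bandwidth rule, not the paper's relaxed model)."""
    n, d = train.shape
    # Pairwise distances among training points give the local bandwidths h_i.
    dists = np.linalg.norm(train[:, None, :] - train[None, :, :], axis=-1)
    h = np.sort(dists, axis=1)[:, k]                      # column 0 is the self-distance
    # Gaussian kernel contribution of every training point at every query point.
    q_dists = np.linalg.norm(query[:, None, :] - train[None, :, :], axis=-1)
    norm = (2 * np.pi) ** (d / 2) * h ** d
    kernels = np.exp(-0.5 * (q_dists / h) ** 2) / norm
    return kernels.mean(axis=1)

def kde_classify(X_train, y_train, X_test, k=10):
    """Bayes rule: pick the class whose prior-weighted density is largest."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        prior = len(Xc) / len(X_train)
        # Each class is assumed to contain at least k+1 (and at least 2) samples.
        scores.append(prior * variable_kde(Xc, X_test, k=min(k, len(Xc) - 1)))
    return classes[np.argmax(np.stack(scores, axis=0), axis=0)]
```

In this kind of classifier, sharper density estimates translate directly into decision boundaries closer to the Bayes-optimal ones, which is why an improved pointwise MSE rate can plausibly yield the higher prediction accuracy reported in the abstract.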
