Abstract

The growing volume of data produced in the real world makes classification of very large scale data a challenging task. Parallel processing of very large, high dimensional data is therefore very important. Hyper-Surface Classification (HSC) has proven to be an effective and efficient classification algorithm for two and three dimensional data. Although HSC can be extended to high dimensional data via dimension reduction or ensemble techniques, handling high dimensional data directly is not trivial. Inspired by the decision tree idea, this work proposes an improvement of HSC that deals with high dimensional data directly. Furthermore, we parallelize the improved HSC algorithm (PHSC) to handle large scale high dimensional data based on the MapReduce framework, a current and powerful parallel programming technique used in many fields. Experimental results show that the parallel improved HSC algorithm not only deals with high dimensional data directly, but also handles large scale data sets. Furthermore, the evaluation criteria of scaleup, speedup and sizeup validate its efficiency.
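To illustrate the MapReduce pattern the abstract refers to, the sketch below partitions labeled training data across simulated mappers, has each mapper emit partial per-class statistics, and merges them in a reducer. This is only a minimal stand-in (a nearest-centroid learner, not the paper's actual PHSC algorithm); the function names and the round-robin partitioning are assumptions made for illustration.

```python
from collections import defaultdict

def map_phase(samples, num_mappers):
    """Partition labeled samples across mappers, simulating MapReduce input splits."""
    partitions = defaultdict(list)
    for i, sample in enumerate(samples):
        partitions[i % num_mappers].append(sample)  # round-robin split (illustrative)
    return list(partitions.values())

def mapper(partition):
    """Emit (label, (partial_sum_vector, count)) pairs for this split."""
    sums, counts = {}, defaultdict(int)
    for features, label in partition:
        if label not in sums:
            sums[label] = list(features)
        else:
            sums[label] = [a + b for a, b in zip(sums[label], features)]
        counts[label] += 1
    return [(label, (sums[label], counts[label])) for label in sums]

def reducer(mapped_outputs):
    """Merge the partial sums from all mappers into final per-class centroids."""
    total = {}
    for output in mapped_outputs:
        for label, (s, c) in output:
            if label not in total:
                total[label] = (list(s), c)
            else:
                prev_s, prev_c = total[label]
                total[label] = ([a + b for a, b in zip(prev_s, s)], prev_c + c)
    return {label: [x / c for x in s] for label, (s, c) in total.items()}

def classify(x, centroids):
    """Assign x to the class whose centroid is nearest (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))
```

The key point of the pattern is that each mapper sees only its own split, so training statistics are computed in parallel and only small partial results cross the network to the reducer, which is what makes the approach scale to large data sets.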
