Abstract

We present two new parallel implementations of the ensemble learning methods Random Forests (RF) and Extremely Randomized Trees (ERT), called gpuRF and gpuERT, for emerging many-core platforms, e.g., contemporary graphics cards suitable for general-purpose computing (GPGPU). RF and ERT are two ensemble methods of high importance within machine learning for generating predictive models. They operate by constructing a multitude of decision trees at training time and producing a prediction by aggregating the outputs of the individual trees. Thanks to the inherent parallelism of the task, an obvious approach is to employ contemporary GPUs with a large number of processing cores. Previous parallel algorithms for RF in the literature are either designed for traditional multi-core CPU platforms or for early-generation GPUs with simpler architectures and relatively few cores. For ERT, only briefly sketched parallelization attempts exist in the literature. The new parallel algorithms are designed for contemporary GPUs with a large number of cores and take into account aspects of the newer hardware architectures, such as the memory hierarchy and thread scheduling. They are implemented in C/C++ using the CUDA interface to attain the best possible performance on NVIDIA GPUs. An experimental study comparing the most important previous solutions for CPU and GPU platforms with the new implementations shows significant efficiency advantages for the latter, often by several orders of magnitude.
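To illustrate the kind of per-sample parallelism the abstract refers to, the sketch below shows one way ensemble prediction can be mapped onto CUDA: each GPU thread classifies one sample by traversing every tree of the forest and taking a majority vote. The flattened Node layout, the predictForest kernel, and the toy single-stump forest are illustrative assumptions only; they are not the data structures or kernels of gpuRF/gpuERT.

```cuda
// Minimal ensemble-prediction sketch (not the gpuRF/gpuERT implementation):
// one thread per sample, traversing all trees and taking a majority vote.
#include <cstdio>
#include <cuda_runtime.h>

struct Node {          // one decision-tree node in a flattened array
    int   feature;     // feature tested at this node (-1 marks a leaf)
    float threshold;   // go left if x[feature] <= threshold
    int   left, right; // child indices within the same node array
    int   label;       // class label, valid only when feature == -1
};

__global__ void predictForest(const Node* nodes,      // all trees, concatenated
                              const int*  treeOffset, // root index of each tree
                              int numTrees,
                              const float* samples,   // numSamples x numFeatures
                              int numFeatures, int numSamples, int numClasses,
                              int* predictions)       // one label per sample
{
    int s = blockIdx.x * blockDim.x + threadIdx.x;
    if (s >= numSamples) return;

    const float* x = samples + (size_t)s * numFeatures;
    int votes[16] = {0};                 // assumes numClasses <= 16 for brevity

    for (int t = 0; t < numTrees; ++t) {
        int i = treeOffset[t];           // start at the root of tree t
        while (nodes[i].feature != -1)   // descend until a leaf is reached
            i = (x[nodes[i].feature] <= nodes[i].threshold) ? nodes[i].left
                                                            : nodes[i].right;
        ++votes[nodes[i].label];         // leaf casts one vote
    }

    int best = 0;                        // majority vote across the trees
    for (int c = 1; c < numClasses; ++c)
        if (votes[c] > votes[best]) best = c;
    predictions[s] = best;
}

int main() {
    // Toy forest: a single decision stump splitting on feature 0 at 0.5.
    Node h_nodes[] = { {  0, 0.5f,  1,  2, -1 },   // root
                       { -1, 0.0f, -1, -1,  0 },   // left leaf  -> class 0
                       { -1, 0.0f, -1, -1,  1 } }; // right leaf -> class 1
    int   h_offsets[] = { 0 };
    float h_samples[] = { 0.2f, 0.9f };            // two 1-feature samples
    int   h_pred[2];

    Node*  d_nodes;   cudaMalloc(&d_nodes,   sizeof(h_nodes));
    int*   d_offsets; cudaMalloc(&d_offsets, sizeof(h_offsets));
    float* d_samples; cudaMalloc(&d_samples, sizeof(h_samples));
    int*   d_pred;    cudaMalloc(&d_pred,    sizeof(h_pred));
    cudaMemcpy(d_nodes,   h_nodes,   sizeof(h_nodes),   cudaMemcpyHostToDevice);
    cudaMemcpy(d_offsets, h_offsets, sizeof(h_offsets), cudaMemcpyHostToDevice);
    cudaMemcpy(d_samples, h_samples, sizeof(h_samples), cudaMemcpyHostToDevice);

    predictForest<<<1, 32>>>(d_nodes, d_offsets, 1, d_samples, 1, 2, 2, d_pred);
    cudaMemcpy(h_pred, d_pred, sizeof(h_pred), cudaMemcpyDeviceToHost);
    printf("predictions: %d %d\n", h_pred[0], h_pred[1]);  // expected: 0 1
    return 0;
}
```

Per-sample threading is only one possible mapping; parallelism can also be assigned per tree or per node split, and the choice interacts with memory-hierarchy and thread-scheduling considerations of the kind the abstract mentions.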
