Abstract

Using the CIFAR-10 dataset, this study investigates how parallel processing affects the performance of the Random Forest machine learning algorithm, focusing on two key metrics: accuracy and training time. Two configurations were compared, one with and one without parallel processing. The results show that the Random Forest algorithm retains its predictive strength under parallel execution, maintaining a high accuracy of 97.50%. In addition, parallelization notably shortens training time, from 0.6187 to 0.4753 seconds. This gain in time efficiency highlights the value of parallelization in executing tasks concurrently, improving the computational efficiency of the training process. These findings offer useful insight into optimizing machine learning algorithms through parallel processing techniques.
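The comparison described above can be sketched with scikit-learn, whose `RandomForestClassifier` exposes an `n_jobs` parameter for parallel tree construction. This is an illustrative sketch only: the synthetic dataset, tree count, and split below are assumptions, not the paper's actual CIFAR-10 setup or hyperparameters.

```python
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data; the study itself uses CIFAR-10 images.
X, y = make_classification(n_samples=2000, n_features=64, n_informative=32,
                           n_classes=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train(n_jobs):
    """Fit a Random Forest with the given degree of parallelism and time it."""
    clf = RandomForestClassifier(n_estimators=100, n_jobs=n_jobs, random_state=0)
    start = time.perf_counter()
    clf.fit(X_train, y_train)
    return clf, time.perf_counter() - start

clf_serial, t_serial = train(n_jobs=1)       # single-core training
clf_parallel, t_parallel = train(n_jobs=-1)  # all available cores

acc = accuracy_score(y_test, clf_parallel.predict(X_test))
print(f"serial: {t_serial:.4f}s  parallel: {t_parallel:.4f}s  accuracy: {acc:.4f}")
```

Because trees are built independently, `n_jobs=-1` distributes them across cores without changing the fitted model: with a fixed `random_state`, the serial and parallel forests produce identical predictions, so only training time differs.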
