Abstract

Parallel computing is applied across many technology industries to solve complex scientific problems that demand high performance and efficiency, helping to use resources effectively and save time, money, and energy. The technique has developed over decades as the physical limits of single chips have been approached. The development of parallel computing is closely tied to the evolution of hardware and software, and many forms of parallelism are applied to different systems to improve their performance. This paper focuses on threaded programming: it applies parallelism to a decision tree program and tests the program's performance while varying the thread count and the device configuration, aiming to demonstrate the limitations of parallelism from both the software and the hardware side. The experimental results show that the overall trend of performance variation is similar across devices, but the degree of improvement depends on the hardware implementation: a device with a better configuration performs better. In summary, parallelism can only be fully utilized when a properly designed program runs on a capable machine, so innovation in both hardware and software is essential to the future development of parallel computing.
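The abstract does not show the decision tree program itself, but the experimental setup it describes, running the same workload while varying the number of threads, can be sketched generically. The following minimal Python sketch (an assumption, not the paper's code) partitions a summation across a configurable number of worker threads and checks that every thread count yields the same result; note that in CPython the global interpreter lock limits CPU-bound speedup from threads, so timing differences would be more visible in a language with native threads.

```python
import threading

def parallel_sum(data, n_threads):
    """Sum `data` by splitting it into one contiguous chunk per thread.

    Hypothetical stand-in for the paper's decision tree workload:
    it only illustrates varying the thread count, not the real program.
    """
    chunk = (len(data) + n_threads - 1) // n_threads
    partials = [0] * n_threads  # one slot per thread, no shared-state races

    def worker(i):
        partials[i] = sum(data[i * chunk:(i + 1) * chunk])

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1_000_000))
    expected = sum(data)
    # Re-run the same workload with different thread counts, as in
    # the experiment the abstract describes.
    for n in (1, 2, 4, 8):
        assert parallel_sum(data, n) == expected
```

In a real experiment, each configuration would also be timed (e.g. with `time.perf_counter()`) so that performance can be compared across thread counts and devices.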
