Abstract

Parallel and Distributed Computing Using MPI on Raspberry Pi Cluster

Highlights

  • Distributed computing is the process of combining the power of several computers, which are logically and possibly even geographically distributed, to collaboratively run a single computational task in a transparent and coherent way, so that they appear as a single, centralized system

  • This research was primarily conducted as an independent research project on Parallel and Distributed Computing Using the Message Passing Interface (MPI) on a Raspberry Pi Cluster for the University of Computer Studies, Monywa (UCS-Monywa)

  • This paper represents the outcome of a hands-on opportunity to better understand distributed computing, parallel performance using MPI libraries, how the odd-even transposition sorting algorithm works on a Raspberry Pi cluster, and its potential benefits to higher education


Summary

INTRODUCTION

Distributed computing is the process of combining the power of several computers, which are logically and possibly even geographically distributed, to collaboratively run a single computational task in a transparent and coherent way, so that they appear as a single, centralized system. MPI is a de facto standard for parallel programming on distributed memory systems. This research was primarily conducted as an independent research project on Parallel and Distributed Computing Using MPI on a Raspberry Pi Cluster for the University of Computer Studies, Monywa (UCS-Monywa). Portability arises from the standard API and the existence of MPI libraries on a wide range of machines. MPI is the most common method of programming parallel and distributed systems and is considered today's standard message passing library. This paper represents the outcome of a hands-on opportunity to better understand distributed computing, parallel performance using MPI libraries, how the odd-even transposition sorting algorithm works on a Raspberry Pi cluster, and its potential benefits to higher education.
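To make the approach concrete, the following is a minimal sketch (not the authors' implementation) of odd-even transposition sort with MPI, assuming the simplest case of one integer per rank. File and executable names (oddeven.c, oddeven) are illustrative; it would be compiled with mpicc and launched with mpirun across the Raspberry Pi nodes.

/*
 * Minimal sketch: odd-even transposition sort with MPI, one value per rank.
 * Assumed build/run (names are illustrative):
 *   mpicc oddeven.c -o oddeven
 *   mpirun -np 4 ./oddeven
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank starts with one pseudo-random value. */
    srand(rank + 1);
    int value = rand() % 100;
    printf("rank %d before: %d\n", rank, value);

    /* size phases guarantee the values are fully sorted across ranks. */
    for (int phase = 0; phase < size; phase++) {
        int partner;
        if (phase % 2 == 0)                    /* even phase: pairs (0,1),(2,3),... */
            partner = (rank % 2 == 0) ? rank + 1 : rank - 1;
        else                                   /* odd phase: pairs (1,2),(3,4),...  */
            partner = (rank % 2 == 0) ? rank - 1 : rank + 1;

        if (partner < 0 || partner >= size)
            continue;                          /* no partner at the boundary */

        /* Exchange values with the partner in a single compare-exchange step. */
        int received;
        MPI_Sendrecv(&value, 1, MPI_INT, partner, 0,
                     &received, 1, MPI_INT, partner, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Lower rank keeps the smaller value, higher rank keeps the larger. */
        if (rank < partner)
            value = (received < value) ? received : value;
        else
            value = (received > value) ? received : value;
    }

    printf("rank %d after:  %d\n", rank, value);
    MPI_Finalize();
    return 0;
}

In practice each rank would hold a block of keys rather than a single value, with a local sort followed by block-wise merge-and-split exchanges, but the phase structure and MPI_Sendrecv pattern remain the same.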

The Art of High Performance Computing
Power of Cluster Computing Requirements
Speedup for Parallel Computing
RELATED RESEARCH WORKS
SYSTEM DESIGN AND IMPLEMENTATION
EXPERIMENTAL RESULTS
CONCLUSION AND FURTHER EXTENSION

