Abstract

This chapter discusses parallelism and its problems. Massively parallel computing systems have long appealed to many people, for a variety of reasons. The importance of parallelism is that it provides the only avenue by which higher performance can continue to be achieved once the speed limit of the uniprocessor is reached; the urgency people attach to work on parallelism depends on how imminent they perceive that stage to be. The term multiprocessor refers to a system in which a number of processors work out of the same memory. A multicomputer, by contrast, consists of separate computers, each with its own memory, communication between them taking place by messages passed along fixed links. A multiprocessor with a number of identical processors and a number of memory modules interconnected via a bus is known as a symmetric multiprocessor. Currently, there is interest in symmetric multiprocessors in which the individual processors are of supercomputer performance. These are seen in the supercomputer world as providing scope for gaining speed through parallelism while keeping within the mainstream of computer development.
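
The contrast between the two organizations lies in how the parts communicate: through a common memory, or by messages along fixed links. The following short Go program is a minimal sketch of that contrast, invented for illustration and not drawn from the chapter; goroutines stand in for processors, a mutex-guarded variable for the shared memory of a multiprocessor, and a channel for the message link of a multicomputer. All names are hypothetical.

package main

import (
	"fmt"
	"sync"
)

// Multiprocessor analogy: all workers update one counter that lives in
// common memory, coordinating access with a lock.
func sharedMemorySum(values []int) int {
	var mu sync.Mutex
	var wg sync.WaitGroup
	total := 0
	for _, v := range values {
		wg.Add(1)
		go func(v int) {
			defer wg.Done()
			mu.Lock()
			total += v // every worker touches the same memory word
			mu.Unlock()
		}(v)
	}
	wg.Wait()
	return total
}

// Multicomputer analogy: each worker holds its value privately and
// interacts only by sending a message along a fixed link (a channel).
func messagePassingSum(values []int) int {
	link := make(chan int)
	for _, v := range values {
		go func(v int) {
			link <- v // the sole interaction is this message
		}(v)
	}
	total := 0
	for range values {
		total += <-link
	}
	return total
}

func main() {
	data := []int{1, 2, 3, 4, 5}
	fmt.Println(sharedMemorySum(data))   // 15
	fmt.Println(messagePassingSum(data)) // 15
}

Both functions compute the same result; the design difference the chapter describes is visible in where state lives (one shared total versus private values) and in the synchronization mechanism each organization requires (a lock on common memory versus explicit message passing).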
