Abstract

This chapter discusses multiprocessor machines. Computers constructed of multiple, independent processors allow for a broad view of building applications. Multiple-processor machines come in different flavors, although they all share the feature of having more than one processor attached to a memory system. To understand the constraining issues, it is important to study parallelism. Vector processors exploit the inherent parallelism of specific operations to gain a speedup over scalar processors, but vector parallelism is only one level of parallelism. The various sources and levels of parallelism can be classified as: (1) instruction level, (2) loop level, and (3) task level. A set of processes that must interact through communication is referred to as a set of coordinating processes. Synchronization is a mechanism by which processes notify each other of having reached a specific point in an execution. It is common to divide multiple-processor machines into two classes: shared memory machines and distributed memory machines.
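As a minimal sketch of the synchronization mechanism described above, the example below uses a barrier: each thread signals that it has reached a specific point in its execution and waits for the others. The use of Python's `threading` module is an assumption for illustration only; the chapter itself does not prescribe any particular API.

```python
import threading

N = 3                       # number of coordinating threads (illustrative choice)
counter = 0                 # shared memory that all threads update
lock = threading.Lock()     # protects the shared counter
barrier = threading.Barrier(N)
results = []

def worker():
    global counter
    # Phase 1: each thread contributes to the shared value.
    with lock:
        counter += 1
    # Synchronization point: no thread proceeds until all N have arrived.
    barrier.wait()
    # Phase 2: every thread now observes the fully updated value.
    results.append(counter)

threads = [threading.Thread(target=worker) for _ in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # every entry is 3: all threads saw phase 1 complete
```

Without the barrier, a fast thread could read `counter` before the other threads had incremented it; the barrier guarantees that phase 2 begins only after every thread has finished phase 1.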
