Abstract

The Application Space Architecture (ASA) defines parallel processor design as an adjunct to the Instruction Set Architectures (ISA) defined by Von Neumann for single processors. Before addressing hardware, one must understand the applications that require parallel processors and the software approaches needed to meet their speed constraints. Parallel processor applications use a single task to run N (the number of processors) times faster than it could run on a single processor, where Processor Utilization Efficiency (PUE) is the ratio of single-processor time to parallel-processor time, divided by N. Such applications require processing to be split into independent elements running in parallel on a large number of processors, and the processors must not sit idle (current PUEs are typically 2-10%). This sets the requirement that the “independent” elements communicate with each other without stopping. In special cases, clusters of PCs are used to perform parametric analyses by running many copies of a simulation on separate computers, typically with different random numbers. Except for initial and terminal processes, these separate simulations hardly communicate while running, so they are easily divided into separate tasks that exchange information between computers by passing data or messages through a top-level management task. Such applications (e.g., LINPACK) are labeled “embarrassingly parallel”.
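The PUE definition above can be made concrete with a short sketch. The function name and the sample timing values below are hypothetical, chosen only to illustrate the ratio the abstract defines (speedup divided by processor count):

```python
def processor_utilization_efficiency(t_single: float,
                                     t_parallel: float,
                                     n_processors: int) -> float:
    """PUE as defined in the abstract: the ratio of single-processor
    time to parallel-processor time (the speedup), divided by N."""
    speedup = t_single / t_parallel
    return speedup / n_processors

# Hypothetical example: a task taking 1000 s on one processor finishes
# in 250 s on 100 processors -> speedup of 4x, so PUE = 4 / 100 = 0.04,
# i.e. 4%, within the 2-10% range the abstract cites as typical.
print(processor_utilization_efficiency(1000.0, 250.0, 100))  # → 0.04
```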
