Abstract
In the past, the speed of computers was increased mainly by increasing the speed of their logic elements. As a result, memory cycle times have improved by two orders of magnitude, and technological advances over the last 20 years have increased processor speeds by as much as three orders of magnitude. Today, with the physical limit on the propagation speed of electrical signals essentially reached, additional speed can be gained only by improving computer organization or by using the hardware more effectively. Current technology makes it possible to combine processors into large parallel structures, and with a suitable organization of n processors an n-fold increase in the rate of computation can be achieved. Parallelism in computation has brought with it new problems, both in the creation of new algorithms and programs and in the design of computer architectures. Parallel algorithms and programs are closely connected with the architecture of parallel computers; their design and analysis therefore cannot be considered independently of their implementation and of the architecture of the computer on which they are to run. The history of parallel data processing offers several examples in which a valuable concept in the design of algorithms, programs or computers has had a large impact on the efficiency of computation.
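To make the n-fold claim concrete, consider the idealized case the abstract describes: a workload that splits into fully independent pieces, so that n processors can each take one piece and, communication costs aside, the computation rate scales by roughly n. The sketch below is illustrative only and is not taken from the paper; it assumes Python's standard multiprocessing module, and the names partial_sum and parallel_sum are hypothetical.

    # Illustrative sketch: an independent summation divided across
    # n worker processes, the idealized case of near n-fold speedup.
    from multiprocessing import Pool

    def partial_sum(bounds):
        # Each worker sums its own disjoint range independently.
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    def parallel_sum(n_items, n_procs):
        # Split [0, n_items) into n_procs contiguous chunks.
        step = n_items // n_procs
        chunks = [(k * step, (k + 1) * step) for k in range(n_procs)]
        chunks[-1] = (chunks[-1][0], n_items)  # absorb the remainder
        with Pool(n_procs) as pool:
            return sum(pool.map(partial_sum, chunks))

    if __name__ == "__main__":
        print(parallel_sum(10_000_000, 4))

In practice the speedup falls short of n whenever the problem contains a sequential portion or the processors must communicate, which is one reason algorithm design and machine architecture cannot be treated separately.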