Abstract

The conventional form of Amdahl’s law states that the speedup of a calculation on a multiprocessor machine is bounded by a constant, simply because every algorithm contains some non-parallelizable part. This brief paper considers an additional, more general factor that limits parallel performance: the processes implementing a distributed task cannot all start simultaneously, so every process contributes some start-up time, further reducing the gain from parallel processing. The simple formula proposed here to extend Amdahl’s law leads to a less optimistic picture than the classical result: for a large number of processor units the modified speedup does not approach a constant but vanishes. This is the outcome of a competition between two factors: as the number of parallel processes grows, the computational load per process decreases while the accumulated start-up time increases. The effect can be mitigated by imposing a suitable regularity on how the parallel processes are launched.
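As an illustration only (the abstract does not give the paper’s exact formula), the sketch below compares classical Amdahl speedup with a variant in which each of the N processes adds an assumed fixed start-up delay t0 that accumulates sequentially; the linear start-up model, the parameter names, and the chosen values of s and t0 are assumptions, not results from the paper.

# Hypothetical illustration: classical Amdahl's law versus an assumed
# extension in which every one of the n processes adds a fixed start-up
# delay t0 before useful work begins.

def amdahl_speedup(n, serial_fraction):
    # Classical Amdahl's law: speedup on n processors.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def speedup_with_startup(n, serial_fraction, t0, t_total=1.0):
    # Assumed model: total parallel run time gains an extra n * t0 of
    # sequentially accumulated start-up overhead.
    parallel_time = (serial_fraction * t_total
                     + (1.0 - serial_fraction) * t_total / n
                     + n * t0)
    return t_total / parallel_time

if __name__ == "__main__":
    s, t0 = 0.05, 1e-4   # assumed serial fraction and per-process start-up cost
    for n in (1, 10, 100, 1_000, 10_000, 100_000):
        print(f"n={n:>7}  Amdahl={amdahl_speedup(n, s):8.2f}  "
              f"with start-up={speedup_with_startup(n, s, t0):8.2f}")

With these assumed parameters the classical speedup saturates near 1/s, whereas the variant with accumulated start-up time peaks at a moderate n and then falls toward zero, which is the qualitative behaviour the abstract describes.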

