Abstract

Dividing a computation into subtasks that can be executed on separate processing elements is difficult. Dataflow systems represent an extreme case in which each machine instruction is an independent subcomputation; as a consequence, the execution overhead is very high. In this paper, we present an execution model for dataflow in which the unit of computation is not a single instruction. Rather, the dataflow graph is divided into paths according to its data dependencies. Each path is then treated as a very simple process: it is loaded into memory; it switches between ready, running, and blocked states; and it communicates with other such processes through messages. The main advantage of the proposed approach over conventional approaches to parallelism is that it provides a mechanical way of creating subcomputations that can be executed in parallel. At the same time, it does not suffer from the inefficiencies inherent in pure dataflow systems. Instead, it permits the granularity of processes to be adjusted to balance the amount of parallelism that can usefully be exploited against the amount of sequential execution that can be handled efficiently within one process.
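
To make the path-based decomposition concrete, the sketch below shows one simple way a dependency graph could be split into linear paths. It is an illustration under our own assumptions, not the algorithm from the paper; the function name partition_into_paths and the example node labels are hypothetical.

```python
from collections import defaultdict

def partition_into_paths(edges):
    """Split a dependency DAG, given as (producer, consumer) edges, into
    linear paths: a path is extended through a node only while that node
    has exactly one successor and that successor has exactly one predecessor."""
    succs, preds = defaultdict(list), defaultdict(list)
    nodes = set()
    for a, b in edges:
        succs[a].append(b)
        preds[b].append(a)
        nodes.update((a, b))

    visited, paths = set(), []

    def grow_path(start):
        node, path = start, []
        while node is not None and node not in visited:
            visited.add(node)
            path.append(node)
            out = succs[node]
            node = out[0] if len(out) == 1 and len(preds[out[0]]) == 1 else None
        return path

    # Start paths at sources, forks, and joins first, then sweep up any
    # remaining nodes so every instruction ends up on exactly one path.
    starts = [n for n in sorted(nodes) if len(preds[n]) != 1] + sorted(nodes)
    for start in starts:
        if start not in visited:
            paths.append(grow_path(start))
    return paths

# A small dataflow graph: two loads feed an add, which feeds a multiply,
# which feeds a store.
edges = [("load1", "add"), ("load2", "add"), ("add", "mul"), ("mul", "store")]
print(partition_into_paths(edges))
# -> [['add', 'mul', 'store'], ['load1'], ['load2']]
```

Under this reading, each resulting path would be scheduled as a lightweight process that blocks when a value produced on another path has not yet arrived and becomes ready again once the message carrying that value is delivered.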
