Abstract

This paper examines the relationship between parallelism granularity and system overhead in dataflow computer systems, and argues that a trade-off between the two must be struck to achieve optimal efficiency of the overall system. On the basis of this discussion, a macro-dataflow computational model is established to exploit task-level parallelism. An Experimental Distributed Dataflow Simulation System (EDDSS), which operates as a macro-dataflow computer, is developed to evaluate the effectiveness of the macro-dataflow computational model.
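To illustrate the idea behind the macro-dataflow model, the following sketch shows coarse-grained tasks that fire only when all of their input tokens are available, so scheduling overhead is paid once per task rather than once per fine-grained instruction. This is a hypothetical illustration only; the class and names below are not taken from the paper or from EDDSS.

```python
from concurrent.futures import ThreadPoolExecutor

class MacroDataflowGraph:
    """Minimal macro-dataflow sketch (illustrative, not the paper's system):
    each node is a coarse-grained task enabled when all inputs have arrived."""

    def __init__(self):
        self.nodes = {}    # name -> (callable, tuple of input names)
        self.results = {}  # name -> produced token

    def add_task(self, name, func, inputs=()):
        self.nodes[name] = (func, tuple(inputs))

    def run(self, workers=4):
        pending = dict(self.nodes)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            while pending:
                # A task fires when every input token is available (dataflow rule).
                ready = [n for n, (_, ins) in pending.items()
                         if all(i in self.results for i in ins)]
                futures = {
                    n: pool.submit(pending[n][0],
                                   *(self.results[i] for i in pending[n][1]))
                    for n in ready
                }
                for n, fut in futures.items():
                    self.results[n] = fut.result()
                    del pending[n]
        return self.results

# Two independent macro-tasks run in parallel; "sum" fires once both finish.
g = MacroDataflowGraph()
g.add_task("a", lambda: 2)
g.add_task("b", lambda: 3)
g.add_task("sum", lambda x, y: x + y, inputs=("a", "b"))
print(g.run()["sum"])  # → 5
```

The granularity trade-off the abstract describes shows up here directly: making each task larger amortizes the per-task scheduling cost, while making tasks too large forfeits parallelism.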
