Abstract

The issues of memory latency, synchronization, and distribution costs in multiprocessors are reviewed. The approaches taken by conventional and data flow architectures are contrasted with respect to these issues. It is pointed out that each approach is well suited to certain situations, and that a common framework is possible in which the two execution models can be mixed as the situation demands. An architecture is sketched by describing a few extensions to a conventional processor. While existing programs run with little change, it is shown that improvements can be obtained by using the new features to tolerate memory latency and to exploit more parallelism. It is further shown that data flow graphs can be executed on this architecture with as much parallelism as a data flow architecture could exploit. It is argued that such a testbed allows a selective translation of program segments into the appropriate execution model. Fast context-switching hardware is assumed for this architecture.
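
As a rough illustration of the latency-tolerance idea only (not of the paper's specific extensions), the sketch below simulates a processor that issues memory requests in split-phase fashion and, rather than stalling, switches to another ready thread until the request completes. All names (Thread, run) and the latency value are hypothetical and chosen purely for illustration.

```python
# Minimal sketch, assuming a fixed remote-memory latency and a round-robin
# pool of hardware threads: a long-latency load triggers a context switch
# instead of a stall, so useful work continues while the request is in flight.

from collections import deque

MEM_LATENCY = 4  # assumed latency of a remote memory access, in cycles


class Thread:
    def __init__(self, name, num_loads):
        self.name = name
        self.pending_until = 0   # cycle at which the outstanding load completes
        self.loads_left = num_loads

    def ready(self, cycle):
        return self.loads_left > 0 and cycle >= self.pending_until


def run(threads, max_cycles=40):
    queue = deque(threads)
    for cycle in range(max_cycles):
        # Rotate through the thread pool looking for a ready thread;
        # threads still waiting on memory are simply skipped.
        for _ in range(len(queue)):
            t = queue.popleft()
            queue.append(t)
            if t.ready(cycle):
                # Issue a split-phase load, then switch away instead of waiting.
                t.pending_until = cycle + MEM_LATENCY
                t.loads_left -= 1
                print(f"cycle {cycle}: {t.name} issues load, switches out")
                break
        if all(t.loads_left == 0 for t in queue):
            break


run([Thread("A", 3), Thread("B", 3)])
```

With two threads and a four-cycle latency, the trace shows the processor alternating between them, so the latency of one thread's load is overlapped with the other thread's work; this is the behavior that fast context-switching hardware is meant to make cheap.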
