Abstract

Upcoming processor architectures support parallel processing at several levels: multiple processing elements (PEs) run in parallel, each PE consists of several functional units, and these functional units support sub-word parallelism (SWP), i.e., the parallel execution of operations on data words of low width. In this paper, a parameterized mapping of algorithms onto massively parallel processor architectures (PAs) is derived which exploits both parallelism at the PA level and SWP at the PE level. The mapping establishes a correlation between the parameters of the algorithm and the parameters of the PA, which enables optimization strategies with respect to several expense factors of the PA. The design approach is based on the co-partitioning method and on the partitioning of data dependencies, both applied in a hierarchical manner. Besides the parameters of the PA (such as shape, number of PEs, number of sub-words processed in parallel, channels between the PEs, and their delays), the packing instructions for exploiting SWP can be deduced.
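
To illustrate the notion of sub-word parallelism referred to above, the following is a minimal, generic C sketch (not taken from the paper and not the authors' derived packing instructions): four 8-bit sub-words are packed into one 32-bit word and added lane-wise with a single word-level addition, keeping carries from crossing sub-word boundaries.

```c
#include <stdint.h>
#include <stdio.h>

/* Pack four 8-bit sub-words into one 32-bit word (lane 0 in the low byte). */
static uint32_t pack4(uint8_t a0, uint8_t a1, uint8_t a2, uint8_t a3) {
    return (uint32_t)a0 | ((uint32_t)a1 << 8) |
           ((uint32_t)a2 << 16) | ((uint32_t)a3 << 24);
}

/* SWP-style addition: add four packed 8-bit lanes in parallel while
 * preventing carries from spilling into the neighbouring lane
 * (each lane wraps around modulo 256). */
static uint32_t add4(uint32_t x, uint32_t y) {
    uint32_t low = (x & 0x7F7F7F7Fu) + (y & 0x7F7F7F7Fu); /* add low 7 bits per lane */
    return low ^ ((x ^ y) & 0x80808080u);                 /* restore each lane's MSB */
}

int main(void) {
    uint32_t a = pack4(10, 200, 30, 255);
    uint32_t b = pack4(5, 100, 70, 1);
    uint32_t c = add4(a, b);  /* lanes: 15, 44 (wraps), 100, 0 (wraps) */
    for (int i = 0; i < 4; ++i)
        printf("lane %d: %u\n", i, (c >> (8 * i)) & 0xFFu);
    return 0;
}
```

In an actual PA, such packing and lane-wise operations would be expressed by the processor's SWP/SIMD instructions rather than by bit masking; the sketch only shows why packing multiple narrow operands into one wide word yields parallelism within a single functional unit.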
