Abstract

General-purpose graphics processing units (GPGPUs) support the execution of applications across many fields with high-performance computing capability, owing to their powerful parallel processing architecture. However, the GPGPU parallel processing architecture also faces the "memory wall" problem. When an application's memory accesses are intensive or irregular, contention for memory resources arises and degrades the performance of the memory system. In addition, because different threads' demands on on-chip resources such as registers and warp slots are inconsistent, and because of branch divergence in irregular computing applications, the exploitation of thread-level parallelism (TLP) is severely restricted. Owing to these memory-access and TLP restrictions, the acceleration capability of the GPGPU's large-scale parallel processing architecture has not been exploited effectively. Alleviating memory resource contention and improving TLP are therefore the hotspots of performance optimization for current GPGPU architectures. In this paper, we study how memory-access optimization and TLP improvement contribute to optimizing the performance of the parallel processing architecture. First, we find that memory-access optimization can be accomplished in three ways: reducing the number of global memory accesses, improving the ability to hide memory-access latency, and optimizing the performance of the cache subsystem. Then, to improve TLP, we survey research along three lines: optimizing the thread-allocation scheme, exploiting data approximation and redundancy, and compacting branch divergence. We also analyze the working mechanism, advantages, and challenges of each line of research. Finally, we suggest directions for the future optimization of GPGPU parallel processing architectures.
