Abstract
The Graphics Processing Unit (GPU) is now used extensively for general-purpose GPU programming (GPGPU), allowing for greater exploitation of the multi-core model across many application domains. This is particularly true in cloud/edge/fog computing, where multiple GPU-enabled servers support many different end-user services. This move away from the naturally parallel domain of graphics can incur significant performance issues. Unlike on the CPU, code that is hindered from execution due to blocking/waiting on the GPU can affect thousands of threads, rendering the advantages of a GPU irrelevant and, in the worst case, reducing a highly parallel environment to a serial one. In this paper we present a solution that minimises blocking/waiting in GPGPU computing using a contention manager that offsets memory conflicts across threads through thread re-ordering. We consider memory conflicts not only to avoid corruption (as is standard for transactional memory) but also at the semantic layer of application logic (e.g., enforcing an ordering so that money drawn from a bank account is withdrawn only after all deposits have been applied). We demonstrate that our approach is successful across a number of industry benchmarks and compare it to the only other related solution. We also demonstrate that our approach is scalable in the number of threads (a key requirement on the GPU). We believe this is the first work of its kind to demonstrate a generalised conflict and semantic contention manager suitable for the scale of parallel execution found on a GPU.
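To make the semantic-conflict example concrete, the following is a minimal CUDA sketch. It is not the contention manager proposed in this paper; the kernel, transaction layout, and two-phase re-ordering below are assumptions chosen purely for illustration. The idea it shows is that atomic updates keep the shared balance memory-safe, while the semantic rule (a withdrawal must observe all deposits) is honoured by re-ordering the work so that deposits are scheduled first.

```cuda
// Illustrative sketch only (assumed example, not the paper's contention
// manager): deposits and a withdrawal target one shared account balance.
#include <cstdio>
#include <cuda_runtime.h>

enum TxKind { DEPOSIT = 0, WITHDRAW = 1 };

struct Tx {
    int kind;    // DEPOSIT or WITHDRAW
    int amount;  // amount to add or subtract
};

// One thread per transaction; memory-safe via atomicAdd, but with no
// semantic ordering between deposits and withdrawals within a launch.
__global__ void applyTransactions(const Tx* txs, int n, int* balance)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int delta = (txs[i].kind == DEPOSIT) ? txs[i].amount : -txs[i].amount;
    atomicAdd(balance, delta);
}

int main()
{
    // A withdrawal submitted before the deposits it depends on.
    Tx host[] = { {WITHDRAW, 50}, {DEPOSIT, 30}, {DEPOSIT, 40} };
    const int n = 3;

    // Stand-in for a contention manager: partition the work so every deposit
    // precedes every withdrawal, then run the two groups as ordered phases.
    Tx ordered[n];
    int deposits = 0;
    for (int i = 0; i < n; ++i) if (host[i].kind == DEPOSIT)  ordered[deposits++] = host[i];
    int k = deposits;
    for (int i = 0; i < n; ++i) if (host[i].kind == WITHDRAW) ordered[k++] = host[i];

    Tx* dTxs = nullptr; int* dBalance = nullptr;
    cudaMalloc((void**)&dTxs, n * sizeof(Tx));
    cudaMalloc((void**)&dBalance, sizeof(int));
    cudaMemset(dBalance, 0, sizeof(int));
    cudaMemcpy(dTxs, ordered, n * sizeof(Tx), cudaMemcpyHostToDevice);

    // Kernels launched on the same stream execute in order, so all deposits
    // are applied before the withdrawal phase begins.
    applyTransactions<<<1, 256>>>(dTxs, deposits, dBalance);
    applyTransactions<<<1, 256>>>(dTxs + deposits, n - deposits, dBalance);

    int balance = 0;
    cudaMemcpy(&balance, dBalance, sizeof(int), cudaMemcpyDeviceToHost);
    printf("final balance: %d\n", balance);  // 30 + 40 - 50 = 20

    cudaFree(dTxs);
    cudaFree(dBalance);
    return 0;
}
```

The two-phase launch is only one simple way to impose the semantic ordering; the paper's contribution is a generalised contention manager that resolves such conflicts through thread re-ordering at GPU scale.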