Abstract

Graphics processing units (GPUs) are vital to keeping up with the perpetually growing compute demands of data-intensive applications. However, the overhead of transferring data between host and GPU memory is already a major limiting factor at the single-node level. The situation intensifies in scale-out scenarios, where data movement becomes even more expensive. By augmenting the CloudCL framework with 842-based compression facilities, this article demonstrates that transparent on-the-fly I/O link compression can yield performance improvements between 1.11× and 2.07× across the tested scale-out GPU workloads.
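The core idea described above, transparently compressing buffers before they cross the I/O link and decompressing them on the other side so that fewer bytes are moved, can be sketched as follows. This is only an illustrative sketch: `zlib` stands in for the 842 hardware compressor, the function names are hypothetical, and no actual GPU transfer is performed.

```python
import zlib


def send_over_link(buf: bytes, level: int = 1) -> tuple[bytes, float]:
    """Compress a buffer before a (simulated) host-to-device transfer.

    zlib is used here purely as a software stand-in for the 842
    compressor; the principle is the same: spend cheap (de)compression
    cycles to reduce the number of bytes crossing the I/O link.
    """
    compressed = zlib.compress(buf, level)
    ratio = len(buf) / len(compressed)
    return compressed, ratio


def receive_from_link(compressed: bytes) -> bytes:
    """Decompress on the receiving side, transparently to the consumer."""
    return zlib.decompress(compressed)


# A highly compressible payload (e.g. zero-initialized numeric buffers)
# benefits most; incompressible data would see little or no gain.
payload = bytes(8_000_000)  # 8 MB of zeros
wire_bytes, ratio = send_over_link(payload)
assert receive_from_link(wire_bytes) == payload
print(f"bytes on link: {len(wire_bytes)} (compression ratio {ratio:.1f}x)")
```

Whether the reduced transfer volume translates into an end-to-end speedup depends on how the (de)compression cost compares with the link bandwidth, which is why hardware compressors such as 842 are attractive for this role.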
