Abstract

Caches play a major role in the performance of high-speed computer systems. Trace-driven simulation is the most widely used method for evaluating cache architectures. However, as cache designs move to more complicated architectures and trace sizes grow ever larger, traditional simulation methods are no longer practical due to their long simulation cycles. Several techniques have been proposed to reduce the simulation time of sequential trace-driven simulation. This paper considers the use of general-purpose GPUs to accelerate cache simulation, exploiting set partitioning as the main source of parallelism. We develop more efficient parallel simulation techniques by introducing more domain knowledge into the Compute Unified Device Architecture (CUDA) implementation on the GPU. Our experimental results show that the new algorithm achieves a 2.76x performance improvement over a traditional CPU-based sequential algorithm.
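The set-partitioning idea mentioned above can be illustrated with a minimal sketch (all names and parameters here are hypothetical, not taken from the paper): because references that map to different cache sets never interact, a trace can be split by set index and each sub-trace simulated independently, which is the independence a GPU thread per set would exploit.

```python
# Minimal sketch of set-partitioned trace-driven cache simulation.
# Hypothetical cache configuration, not from the paper:
# 4 sets, 2-way set-associative, LRU replacement, 16-byte blocks.
from collections import defaultdict

NUM_SETS = 4
ASSOC = 2
BLOCK = 16

def partition_trace(addresses):
    """Group memory references by cache set index.

    Each bucket holds only the tags destined for one set, so every
    bucket can be simulated independently (e.g. one GPU thread each).
    """
    buckets = defaultdict(list)
    for addr in addresses:
        block_addr = addr // BLOCK
        buckets[block_addr % NUM_SETS].append(block_addr // NUM_SETS)
    return buckets

def simulate_set(tags):
    """Sequentially simulate one set with LRU; return (hits, misses)."""
    lru = []  # most recently used tag kept at the end
    hits = misses = 0
    for tag in tags:
        if tag in lru:
            hits += 1
            lru.remove(tag)       # will be re-appended as most recent
        else:
            misses += 1
            if len(lru) == ASSOC:
                lru.pop(0)        # evict least recently used
        lru.append(tag)
    return hits, misses

def simulate(addresses):
    """Simulate a whole trace by combining independent per-set results."""
    per_set = [simulate_set(t) for t in partition_trace(addresses).values()]
    total_hits = sum(h for h, _ in per_set)
    total_misses = sum(m for _, m in per_set)
    return total_hits, total_misses
```

For example, the trace `[0, 16, 0, 16, 0]` touches blocks 0 and 1, which fall into different sets, so the two sub-traces are simulated independently and their hit/miss counts simply summed; the per-set loop is where a CUDA implementation would launch parallel threads instead.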
