Abstract

Dynamic data structures are key to many highly efficient and optimized implementations. On the CPU, a dynamic data structure can grow and shrink at run time by allocating and de-allocating memory blocks from the heap and linking them with pointers. An adaptive data structure changes its internal properties and structure at run time according to the requirements of the task at hand; this is known as adaptive use of a data structure. When parallelism on GPUs is exploited through CUDA, there are many limitations on what can serve as a data structure in the GPU's global and shared memory. Typically, only simple arrays are kept in GPU memory, so programmers must find ways to express richer data structures in terms of multiple arrays. In this paper, we study and implement dynamic data structures and adaptive methodologies on the GPU. We mainly explore dynamic parallelism, available on recent CUDA devices, by implementing quicksort on NVIDIA's Kepler-architecture GPUs and analyzing its per-kernel-call breakdown. We also experiment with basic array operations on the GPU by implementing minimum-number finding. The CUDA implementations yield substantial performance gains.
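To illustrate the dynamic-parallelism pattern the abstract refers to, the following is a minimal sketch (not taken from the paper) of a device-side recursive quicksort in the style of NVIDIA's cdpSimpleQuickSort sample: each kernel partitions its range and launches child kernels for the two halves. The thresholds, grid configuration, and fallback selection sort are illustrative assumptions, not the authors' implementation.

```cuda
// Sketch of quicksort via CUDA dynamic parallelism (requires Kepler sm_35+).
// Compile with relocatable device code, e.g.:
//   nvcc -arch=sm_35 -rdc=true quicksort_dp.cu -lcudadevrt
#include <cstdio>
#include <cstdlib>

#define SMALL_THRESHOLD 32   // below this size, sort serially on the device
#define MAX_DEPTH 16         // stop launching child kernels past this depth

// Simple selection sort for small partitions, run by a single thread.
__device__ void selection_sort(int *data, int left, int right)
{
    for (int i = left; i <= right; ++i) {
        int min_val = data[i], min_idx = i;
        for (int j = i + 1; j <= right; ++j) {
            if (data[j] < min_val) { min_val = data[j]; min_idx = j; }
        }
        if (min_idx != i) { data[min_idx] = data[i]; data[i] = min_val; }
    }
}

// Each kernel invocation partitions its range and launches child kernels
// for the two halves -- the essence of dynamic parallelism.
__global__ void cdp_quicksort(int *data, int left, int right, int depth)
{
    if (depth >= MAX_DEPTH || right - left <= SMALL_THRESHOLD) {
        selection_sort(data, left, right);
        return;
    }

    int *lptr = data + left;
    int *rptr = data + right;
    int pivot = data[(left + right) / 2];

    // In-place partition around the pivot.
    while (lptr <= rptr) {
        int lval = *lptr, rval = *rptr;
        while (lval < pivot) { lptr++; lval = *lptr; }
        while (rval > pivot) { rptr--; rval = *rptr; }
        if (lptr <= rptr) { *lptr++ = rval; *rptr-- = lval; }
    }

    int nright = rptr - data;
    int nleft  = lptr - data;

    // Launch a child kernel from the device for each non-empty half.
    if (left < nright) {
        cudaStream_t s;
        cudaStreamCreateWithFlags(&s, cudaStreamNonBlocking);
        cdp_quicksort<<<1, 1, 0, s>>>(data, left, nright, depth + 1);
        cudaStreamDestroy(s);
    }
    if (nleft < right) {
        cudaStream_t s;
        cudaStreamCreateWithFlags(&s, cudaStreamNonBlocking);
        cdp_quicksort<<<1, 1, 0, s>>>(data, nleft, right, depth + 1);
        cudaStreamDestroy(s);
    }
}

int main()
{
    const int n = 1 << 16;
    int *h = (int *)malloc(n * sizeof(int));
    for (int i = 0; i < n; ++i) h[i] = rand();

    int *d;
    cudaMalloc(&d, n * sizeof(int));
    cudaMemcpy(d, h, n * sizeof(int), cudaMemcpyHostToDevice);

    cdp_quicksort<<<1, 1>>>(d, 0, n - 1, 0);   // top-level launch from the host
    cudaDeviceSynchronize();

    cudaMemcpy(h, d, n * sizeof(int), cudaMemcpyDeviceToHost);
    printf("first=%d last=%d\n", h[0], h[n - 1]);

    cudaFree(d);
    free(h);
    return 0;
}
```

Because child launches happen on the device, the host issues only the initial kernel call; the per-kernel-call breakdown the paper analyzes corresponds to these nested device-side launches.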
