Abstract

In traditional link-level simulation, the multiple-input multiple-output (MIMO) channel model is one of the most time-consuming modules, and the cost grows further when more realistic geometry-based channel models are used. In this paper, we propose an efficient implementation of the geometry-based spatial channel model (SCM) on a graphics processing unit (GPU). We first analyze the potential parallelism of the SCM module; the SCM simulation includes generating channel coefficients, generating additive white Gaussian noise (AWGN), filtering the input signals, and adding noise. Second, we implement all of these parallelizable sub-modules on the GPU using the Open Computing Language (OpenCL). Next, several effective GPU acceleration techniques are applied to make these GPU functions highly optimized, including an out-of-order command queue, data merging, local memory sharing, and vectorization. Finally, we evaluate our implementation on Nvidia's mid-range GTX 660 GPU. Experimental results show that the proposed GPU implementation achieves a speedup of more than 1000 times over an implementation on a traditional central processing unit (CPU). The simulation time is close to the processing time of the transmitter and receiver, which makes it possible to build a real-time link-level channel simulator for Long Term Evolution (LTE) or LTE-Advanced systems and for software-defined radio. To the best of our knowledge, this is the first work to accelerate the SCM on a GPU, and the results should have significant practical application value.
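
A minimal OpenCL sketch of two of the acceleration techniques named above, the out-of-order command queue and vectorization, is given below. It is illustrative only and is not taken from the paper; the kernel, its element-wise scaling operation, and all identifiers are hypothetical assumptions.

```c
/* Illustrative sketch only: an out-of-order command queue plus a float4-
 * vectorized kernel, two of the GPU optimizations named in the abstract.
 * The kernel simply scales samples by per-sample real-valued coefficients,
 * a simplification of the actual SCM filtering step. */
#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <CL/cl.h>

/* Hypothetical vectorized kernel: each work-item processes four floats. */
static const char *kSrc =
    "__kernel void apply_gain(__global const float4 *sig,\n"
    "                         __global const float4 *coeff,\n"
    "                         __global float4 *out)\n"
    "{\n"
    "    size_t i = get_global_id(0);\n"
    "    out[i] = sig[i] * coeff[i];  /* element-wise float4 multiply */\n"
    "}\n";

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);

    /* Out-of-order queue: independent kernels (e.g. coefficient generation
     * and AWGN generation) may overlap; explicit events enforce ordering
     * only where the data flow requires it. */
    cl_command_queue queue = clCreateCommandQueue(
        ctx, device, CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "apply_gain", &err);

    printf("setup %s\n", err == CL_SUCCESS ? "ok" : "failed");

    clReleaseKernel(kernel);
    clReleaseProgram(prog);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}
```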
