Abstract

The graphics processing unit (GPU) has been applied successfully in many computation- and memory-intensive fields owing to its superior floating-point performance, memory bandwidth, and power efficiency, and it holds great potential for power system applications. Contingency screening is a major time-consuming part of contingency analysis. In the absence of relevant existing research, this paper is the first of its kind to propose a novel GPU-accelerated algorithm for direct current (DC) contingency screening. By actively adapting to the unique characteristics of GPU software and hardware, the proposed GPU algorithm is optimized in four aspects: data transmission, parallel task allocation, memory access, and CUDA (Compute Unified Device Architecture) streams. Case studies on a 3012-bus system and an 8503-bus system show that the GPU-accelerated algorithm, compared with its CPU counterpart, achieves speedups of about 20 and 50 times, respectively. This highly promising performance demonstrates that careful performance tuning aligned with the GPU programming architecture is imperative for a GPU-accelerated algorithm. The presented performance tuning strategies are also applicable to other GPU applications in power systems.
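To illustrate one of the tuning aspects named above, the following CUDA sketch shows how multiple streams can overlap host-device data transmission with kernel execution. It is not the authors' implementation; the kernel, problem sizes, and names (e.g. screen_kernel, n_streams) are illustrative assumptions only, with a trivial placeholder computation standing in for the actual screening work.

#include <cuda_runtime.h>
#include <cstdio>

// Toy kernel standing in for the per-contingency screening work (illustrative only).
__global__ void screen_kernel(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 2.0f * in[i];  // placeholder computation
}

int main() {
    const int n_streams = 4;
    const int chunk = 1 << 20;                      // elements handled per stream
    const int n = n_streams * chunk;
    const size_t bytes = (size_t)n * sizeof(float);

    float *h_in, *h_out, *d_in, *d_out;
    cudaMallocHost(&h_in, bytes);                   // pinned host memory enables async copies
    cudaMallocHost(&h_out, bytes);
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    cudaStream_t streams[n_streams];
    for (int s = 0; s < n_streams; ++s) cudaStreamCreate(&streams[s]);

    // Each stream copies its chunk in, runs the kernel, and copies results back;
    // copies queued in one stream overlap with kernels running in the others.
    for (int s = 0; s < n_streams; ++s) {
        const int offset = s * chunk;
        cudaMemcpyAsync(d_in + offset, h_in + offset, chunk * sizeof(float),
                        cudaMemcpyHostToDevice, streams[s]);
        screen_kernel<<<(chunk + 255) / 256, 256, 0, streams[s]>>>(
            d_in + offset, d_out + offset, chunk);
        cudaMemcpyAsync(h_out + offset, d_out + offset, chunk * sizeof(float),
                        cudaMemcpyDeviceToHost, streams[s]);
    }
    cudaDeviceSynchronize();                        // wait for all streams to finish
    printf("h_out[0] = %f, h_out[n-1] = %f\n", h_out[0], h_out[n - 1]);

    for (int s = 0; s < n_streams; ++s) cudaStreamDestroy(streams[s]);
    cudaFreeHost(h_in); cudaFreeHost(h_out);
    cudaFree(d_in); cudaFree(d_out);
    return 0;
}

The same pattern generalizes to screening batches of contingencies: partitioning the workload across streams hides part of the PCIe transfer time behind computation, which is one way the data transmission and CUDA stream optimizations mentioned in the abstract can work together.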
