Abstract

A power flow study analyzes a power system by obtaining the voltage magnitude and phase angle of each bus in the system. Power flow computation uses a numerical method to solve a nonlinear system of equations, which can be time-consuming because many iterations may be required to reach the final solution. In addition, as the size and complexity of power systems grow, power system studies demand ever more computational power. Consequently, there have been many attempts to reduce computation time by performing power flow computation on large amounts of data with parallel computing. Furthermore, with recent system developments, attempts have been made to accelerate parallel computing using graphics processing units (GPUs). In this review paper, we summarize issues related to parallel processing in power flow studies and analyze research on fast power flow computation using GPU-based parallel computing methods.
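To make the iterative nature of power flow computation concrete, here is a minimal Newton-Raphson sketch for an assumed 2-bus system (slack bus plus one PQ bus). All network values are illustrative per-unit assumptions, not data from the paper; the Jacobian is formed by finite differences for brevity.

```python
import numpy as np

# Assumed 2-bus test system: bus 1 is the slack bus, bus 2 is a PQ load bus.
z_line = 0.01 + 0.05j                      # line impedance (pu, assumed)
y = 1.0 / z_line
Ybus = np.array([[y, -y], [-y, y]])        # bus admittance matrix

P2_spec, Q2_spec = -0.5, -0.2              # specified injections at bus 2 (load)
V1 = 1.0 + 0.0j                            # slack bus voltage (reference)

def mismatch(x):
    """Active/reactive power mismatch at the PQ bus for x = [theta2, V2]."""
    theta2, v2 = x
    V = np.array([V1, v2 * np.exp(1j * theta2)])
    S2 = V[1] * np.conj(Ybus[1] @ V)       # complex power injection at bus 2
    return np.array([S2.real - P2_spec, S2.imag - Q2_spec])

def solve_pf(x0, tol=1e-10, max_iter=20, h=1e-7):
    """Newton-Raphson iteration with a finite-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = mismatch(x)
        if np.linalg.norm(f, np.inf) < tol:
            break
        J = np.empty((2, 2))
        for j in range(2):                 # numerical Jacobian, column by column
            dx = np.zeros(2)
            dx[j] = h
            J[:, j] = (mismatch(x + dx) - f) / h
        x = x - np.linalg.solve(J, f)      # the linear solve repeated each iteration
    return x

theta2, v2 = solve_pf([0.0, 1.0])          # flat start
```

The repeated linear solve inside the loop is the step the reviewed literature targets for parallelization, since it dominates the cost as the number of buses grows.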

Highlights

  • Power flow (PF) analysis is widely used to analyze electrical power systems by estimating the voltage and phase angle of buses inside the power system

  • Many studies have been conducted on parallelizing the LU decomposition frequently used in the Newton-Raphson (NR) method, and some studies have explored QR decomposition for the associated matrix handling

  • In some studies, the speedup is measured relative to an existing commercial library; in others, performance is evaluated against a CPU-only implementation
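The LU decomposition mentioned above is the core linear-algebra kernel inside each NR iteration. The sketch below shows that kernel in isolation, using SciPy's sparse LU; the Jacobian values are arbitrary stand-ins, not taken from a real network.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Stand-in sparse Jacobian for one NR iteration (values are illustrative only).
J = csc_matrix(np.array([
    [10.0, -5.0,  0.0],
    [-5.0, 12.0, -4.0],
    [ 0.0, -4.0,  9.0],
]))
f = np.array([0.1, -0.2, 0.05])   # power mismatch vector (assumed)

lu = splu(J)                      # sparse LU factorization: J = P_r @ L @ U @ P_c
dx = lu.solve(f)                  # forward/backward substitution
# the NR update would then be x_new = x - dx
```

Factorization and the triangular solves are exactly the operations that GPU implementations in the surveyed work attempt to parallelize.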

Summary

Introduction

Power flow (PF) analysis is widely used to analyze electrical power systems by estimating the voltage and phase angle of buses inside the power system. Modern GPUs have been developed in the form of general-purpose computing on graphics processing units (GPGPU) as a substitute for CPUs in high-performance computing (HPC) for various scientific algorithms, such as deep learning, genome mapping, and power flow analysis. This is because GPUs significantly outperform CPUs in terms of the cost-effectiveness of floating-point computational throughput. The GPU is specialized to dramatically accelerate computation-intensive tasks due to its massively parallel architecture, which employs a large number of streaming multiprocessors (SMs), each consisting of 32 scalar processors (SPs) that operate in lockstep. All threads in each warp are executed simultaneously on a single streaming multiprocessor (SM).