Unmanned aerial vehicles (UAVs) are widely used in power inspection. However, limited battery life, turbulent wind, and the vehicle's own motion pose challenges to reliable operation. To address these problems, a reinforcement learning-based energy-saving path-planning algorithm (ESPP-RL) for turbulent wind environments is proposed. The algorithm dynamically adjusts the UAV's flight strategy via reinforcement learning to find the most energy-efficient flight path, so that the UAV can navigate under real-world constraints while saving energy. First, an observation processing module is designed that combines battery energy-consumption prediction with multi-target path planning. Then, the multi-target path-planning problem is decomposed into iteratively and dynamically optimized single-target subproblems, from which the optimal discrete path under the energy-consumption prediction is derived. In addition, an adaptive path-planning reward function based on reinforcement learning is designed. Finally, a simulation scenario for a quadcopter UAV is set up in a 3-D turbulent wind environment. Simulation results show that the proposed algorithm effectively resists turbulent-wind disturbances and improves convergence.
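To illustrate the two core ideas in the abstract, the sketch below shows a greedy decomposition of a multi-target planning problem into successive single-target subproblems, together with an energy-aware step reward. The energy model, the constant wind vector, and all constants (`k_wind`, `goal_bonus`, `reached_radius`) are illustrative assumptions for this sketch, not the paper's actual formulation of ESPP-RL.

```python
import math

# Hypothetical energy model: path-length cost with a headwind penalty.
# The wind vector and k_wind coefficient are assumed for illustration.
def energy_cost(p, q, wind=(1.0, 0.0, 0.0), k_wind=0.5):
    """Energy to fly from p to q: distance, scaled up when moving
    against the (assumed constant) wind."""
    d = [q[i] - p[i] for i in range(3)]
    dist = math.sqrt(sum(x * x for x in d))
    if dist == 0.0:
        return 0.0
    # Headwind component: positive when the flight direction opposes the wind.
    headwind = -sum(d[i] / dist * wind[i] for i in range(3))
    return dist * (1.0 + k_wind * max(0.0, headwind))

def plan_multi_target(start, targets):
    """Greedy decomposition: repeatedly solve the cheapest remaining
    single-target subproblem from the current position."""
    path, remaining, pos, total = [start], list(targets), start, 0.0
    while remaining:
        nxt = min(remaining, key=lambda t: energy_cost(pos, t))
        total += energy_cost(pos, nxt)
        remaining.remove(nxt)
        path.append(nxt)
        pos = nxt
    return path, total

def step_reward(pos, nxt, target, reached_radius=1.0, goal_bonus=50.0):
    """Illustrative RL step reward: negative energy spent on the move,
    plus a bonus when the current subproblem's target is reached."""
    reward = -energy_cost(pos, nxt)
    dist_to_goal = math.sqrt(sum((nxt[i] - target[i]) ** 2 for i in range(3)))
    if dist_to_goal <= reached_radius:
        reward += goal_bonus
    return reward
```

With the assumed wind blowing toward +x, a target in the downwind direction is cheaper than an equidistant upwind one, so the greedy planner visits it first; the step reward then trades off per-move energy against reaching each subgoal.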