Abstract

The high cost of operation and maintenance can impede the development of offshore wind turbines (OWTs). With advancements in detection technology, condition-based maintenance (CBM) has emerged as a promising approach to managing maintenance. Numerous CBM policies have been studied to achieve cost-effective maintenance. However, little research has been conducted on discovering the cost-optimal CBM policy for OWTs. With this motivation, a deep reinforcement learning (DRL) framework for exploring the cost-optimal CBM policy is proposed in this paper. Four CBM policies (i.e., a uniform or dynamic inspection interval combined with a fixed or adaptive repair threshold) are formulated as Markov decision process models. Two DRL algorithms (deep Q network and proximal policy optimization) are applied to derive the dynamic inspection interval and the adaptive repair threshold. To illustrate this framework, a fatigue-sensitive OWT component is used as an example. The four policies are optimized under varying conditions to find the cost-optimal CBM policy. Moreover, the performance of the two DRL algorithms is compared. The case study illustrates the advantage of DRL in deriving optimal maintenance policies for fatigue-sensitive OWT components and highlights the interrelationship between maintenance policy and economic performance. This paper concludes by discussing the effect of cost elements on the CBM policies.
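The CBM-as-MDP idea described above can be illustrated with a toy model. The following is a minimal sketch under stated assumptions: the discretized fatigue-damage states, transition probabilities, cost figures, and fixed repair threshold are all invented for demonstration (none come from the paper), and tabular Q-learning is used as a lightweight stand-in for the paper's DQN so the example stays self-contained.

```python
import random

# Illustrative CBM toy model. All states, costs, and degradation
# probabilities below are assumptions for demonstration only.
N_STATES = 10          # discretized fatigue-damage levels: 0 = new, 9 = failed
ACTIONS = (0, 1)       # 0 = do nothing, 1 = inspect (and repair if degraded)
REPAIR_THRESHOLD = 6   # a fixed repair threshold (one of the four policies)
C_INSPECT, C_REPAIR, C_FAIL = 1.0, 10.0, 100.0

def step(state, action):
    """Advance one inspection interval; return (next_state, incurred_cost)."""
    cost = 0.0
    if action == 1:
        cost += C_INSPECT
        if state >= REPAIR_THRESHOLD:
            cost += C_REPAIR
            state = 0                      # preventive repair restores the component
    # stochastic fatigue growth over the interval
    state = min(state + random.choice([0, 1, 2]), N_STATES - 1)
    if state == N_STATES - 1:
        cost += C_FAIL
        state = 0                          # corrective replacement after failure
    return state, cost

# Tabular Q-learning (cost-minimizing) as a stand-in for the paper's DQN.
random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.1, 0.95, 0.1
state = 0
for _ in range(50_000):
    if random.random() < eps:
        a = random.randrange(2)            # explore
    else:
        a = min(ACTIONS, key=lambda x: Q[state][x])  # greedy w.r.t. cost
    nxt, cost = step(state, a)
    Q[state][a] += alpha * (cost + gamma * min(Q[nxt]) - Q[state][a])
    state = nxt

policy = [min(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)  # learned action for each damage level
```

In this sketch the repair threshold is fixed; the adaptive-threshold and dynamic-interval policies in the paper enlarge the action space (e.g., choosing the next inspection time or the threshold itself), which is where function-approximation methods such as DQN and PPO become necessary.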
