Abstract

The levelised cost of energy of wave energy converters (WECs) is not yet competitive with that of fossil fuel-powered stations. To improve the feasibility of wave energy, it is necessary to develop effective control strategies that maximise energy absorption in mild sea states, whilst limiting motions in high waves. Due to their model-based nature, state-of-the-art control schemes struggle to deal with model uncertainties, to adapt to changes in the system dynamics over time, and to provide real-time centralised control for large arrays of WECs. Here, an alternative solution is introduced to address these challenges, applying deep reinforcement learning (DRL) to the control of WECs for the first time. A DRL agent is initialised from data collected in multiple sea states under linear model predictive control in a linear simulation environment. The agent outperforms model predictive control for high wave heights and periods, but underperforms close to the resonant period of the WEC. DRL also has a much lower computational cost at deployment time, since the computational effort is shifted from deployment to training. This provides confidence in the application of DRL to large arrays of WECs, enabling economies of scale. Additionally, model-free reinforcement learning can autonomously adapt to changes in the system dynamics, enabling fault-tolerant control.

Highlights

  • Ocean wave energy is a type of renewable energy with the potential to contribute significantly to the future energy mix

  • Assuming knowledge of the wave excitation force, model predictive control (MPC) computes the control action, typically the force applied by the power take-off (PTO) system, that results in optimal energy absorption over a future time horizon, using a model of the wave energy converter (WEC) dynamics

  • This paper introduces the first deep reinforcement learning (DRL) control method for WECs
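The receding-horizon idea in the MPC bullet above can be sketched numerically. The following is a minimal illustration only, assuming a toy linear mass-spring-damper model of a heaving point absorber; all parameter values, the Euler discretisation, and the optimiser choice are assumptions for the sketch, not the paper's actual setup:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative (assumed) parameters of a linear heaving point absorber
m, k, c = 1e4, 5e4, 2e3   # mass incl. added mass [kg], hydrostatic stiffness [N/m], damping [Ns/m]
dt, H = 0.1, 20           # time step [s], prediction horizon [steps]
F_MAX = 1e4               # PTO force constraint [N]

def step(x, f_pto, f_exc):
    """One Euler step of the linear dynamics: m*zdd = f_exc + f_pto - c*zd - k*z."""
    z, zd = x
    zdd = (f_exc + f_pto - c * zd - k * z) / m
    return np.array([z + dt * zd, zd + dt * zdd])

def mpc_action(x0, f_exc_pred):
    """Pick the PTO force trajectory maximising absorbed energy over the horizon.
    Absorbed power is -f_pto * velocity, so we minimise the negative of its sum."""
    def neg_energy(f):
        x, e = x0.copy(), 0.0
        for i in range(H):
            e += -f[i] * x[1] * dt                 # absorbed energy increment
            x = step(x, f[i], f_exc_pred[i])
        return -e
    res = minimize(neg_energy, np.zeros(H),
                   bounds=[(-F_MAX, F_MAX)] * H, method="L-BFGS-B")
    return res.x[0]                                # apply only the first action
```

Re-solving at every step and applying only the first force of the optimised trajectory is what makes the horizon "receding"; the force bound illustrates how state and action constraints enter the optimisation directly.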

Summary

Introduction

Ocean wave energy is a type of renewable energy with the potential to contribute significantly to the future energy mix. A promising route to an optimal, real-time nonlinear controller for WECs, inclusive of constraints on both the state and action, is to cast the problem in a dynamic programming framework, as in MPC for WEC problems. However, model-free RL methods, which learn from direct interactions with the environment, require a much larger number of samples (of the order of 10⁸, as opposed to 10⁴, for complex control tasks [28]). For this reason, to date model-free RL has been applied only to the time-averaged resistive and reactive control of WECs with discrete PTO damping and stiffness coefficients [29,30,31], with lower-level controllers necessary to ensure constraint satisfaction [32]. This paper introduces the first deep reinforcement learning (DRL) control method for WECs. The novel approach enables the real-time, nonlinear optimal control of WECs based on model-free RL.
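The earlier model-free RL work referenced above [29,30,31] selected among discrete PTO damping coefficients on a time-averaged (sea-state-by-sea-state) basis. A minimal tabular sketch of that idea follows; the reward model standing in for the wave-to-wire dynamics is fictitious, and every parameter is an illustrative assumption rather than a value from the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)

DAMPINGS = np.array([1e3, 5e3, 1e4, 5e4])   # candidate discrete PTO damping coefficients [Ns/m]
N_STATES = 8                                 # coarse sea-state bins (e.g. by significant wave height)
Q = np.zeros((N_STATES, len(DAMPINGS)))      # tabular action-value estimates

def absorbed_power(state, a_idx):
    """Toy stand-in environment: noisy mean absorbed power [W] for a damping choice.
    Peaks at a fictitious state-dependent optimal damping b_opt."""
    b = DAMPINGS[a_idx]
    b_opt = 2e3 * (state + 1)
    return 1e3 * b * b_opt / (b**2 + b_opt**2) + rng.normal(0.0, 10.0)

def train(episodes=5000, eps=0.1, alpha=0.1):
    """Epsilon-greedy bandit-style learning: one damping decision per sea state."""
    for _ in range(episodes):
        s = int(rng.integers(N_STATES))
        if rng.random() < eps:
            a = int(rng.integers(len(DAMPINGS)))   # explore
        else:
            a = int(np.argmax(Q[s]))               # exploit current estimate
        r = absorbed_power(s, a)
        Q[s, a] += alpha * (r - Q[s, a])           # running estimate of mean power
```

Because each decision is a single time-averaged choice per sea state, this reduces to a contextual bandit and needs no dynamics model; the DRL approach introduced in this paper instead acts in real time on the continuous PTO force.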

Linear Model of a Heaving Point Absorber
Real-Time Reinforcement Learning Control of a Wave Energy Converter
Problem Formulation
RL Real-Time WEC Control Framework
Case Study
Results in Irregular Waves
Training
Comparison between SAC and MPC
Conclusions