Abstract

This paper evaluates two strategies, deep reinforcement learning (DRL) and model predictive control (MPC), for maximizing the power harnessed by a lifting-surface-controlled ocean current turbine (OCT) through depth optimization. To address spatiotemporal uncertainties in the ocean current, an online Gaussian Process (GP) is applied, in which the prediction error of the ocean current speed is also modeled. We compare the performance of the MPC-based optimization with that of the DRL-based algorithm (i.e., deep Q-networks (DQN)) using over one week of field-collected acoustic Doppler current profiler (ADCP) data. The DRL-based algorithm performs nearly on par with the MPC-based algorithm in real-time optimization when the ocean current speed prediction is perfect. However, the DQN-based algorithm surpasses the MPC-based algorithm when ocean current prediction error is considered. These comparative results verify the importance of the DQN in improving the error tolerance of the proposed spatiotemporal optimization.
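To make the abstract's GP-based prediction step concrete, the following is a minimal NumPy sketch of Gaussian Process regression over an ADCP-style current profile, where the posterior standard deviation plays the role of the modeled prediction error. All depths, speeds, and kernel hyperparameters here are hypothetical illustrations, not values from the paper; the paper's GP is online and spatiotemporal, whereas this sketch is depth-only for brevity, and the greedy depth choice at the end is a simplification standing in for the DQN/MPC optimizers, not a reproduction of them.

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=10.0, variance=0.05):
    """Squared-exponential kernel over depth (m); hyperparameters are illustrative."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(z_train, v_train, z_test, noise_var=1e-3):
    """GP posterior mean and std of current speed at the query depths."""
    mu = v_train.mean()                       # use the sample mean as the GP prior mean
    K = rbf_kernel(z_train, z_train) + noise_var * np.eye(len(z_train))
    K_s = rbf_kernel(z_train, z_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, v_train - mu))
    mean = mu + K_s.T @ alpha
    w = np.linalg.solve(L, K_s)
    var = rbf_kernel(z_test, z_test).diagonal() - np.sum(w**2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))  # std models the prediction error

# Hypothetical ADCP bins: depth (m) vs. measured current speed (m/s)
z_obs = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
v_obs = np.array([1.6, 1.7, 1.5, 1.2, 0.9])

# Candidate operating depths for the turbine
z_cand = np.linspace(10.0, 50.0, 41)
v_mean, v_std = gp_predict(z_obs, v_obs, z_cand)

# Greedy stand-in for the optimizer: harnessed power scales roughly with
# speed cubed, so pick the depth with the largest expected v^3
best = np.argmax(v_mean**3)
print(f"depth={z_cand[best]:.1f} m, v={v_mean[best]:.2f}+/-{v_std[best]:.2f} m/s")
```

In the paper's setting, the posterior uncertainty returned here is what distinguishes the two controllers' behavior: an MPC plan optimized against the GP mean alone degrades as the prediction error grows, which is the regime in which the abstract reports the DQN outperforming it.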
