Abstract

This paper evaluates two strategies, deep reinforcement learning (DRL) and model predictive control (MPC), for maximizing the power harnessed by a lifting-surface-controlled ocean current turbine (OCT) through depth optimization. To address spatiotemporal uncertainties in the ocean current, an online Gaussian process (GP) is applied, in which the prediction error of the ocean current speed is also modeled. We compare the performance of the MPC-based optimization with that of the DRL-based algorithm (a deep Q-network, DQN) using over one week of field-collected acoustic Doppler current profiler (ADCP) data. When the ocean current speed prediction is perfect, the DRL-based algorithm performs almost identically to the MPC-based algorithm in real-time optimization. When prediction error is considered, however, the DQN-based algorithm outperforms the MPC-based algorithm. The comparative results verify the importance of the DQN in improving the error tolerance of the proposed spatiotemporal optimization.
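The abstract does not include an implementation, but the online GP component it describes can be illustrated concretely. The sketch below is a minimal example, assuming a sliding-window Gaussian process regressor (scikit-learn's `GaussianProcessRegressor` with an RBF-plus-white-noise kernel) fitted to recent current-speed samples; it returns both a short-horizon speed forecast and its standard deviation, i.e., a model of the prediction error as described above. The class name, kernel choice, window length, and data are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: an online Gaussian process predictor for ocean
# current speed that returns both the mean forecast and its uncertainty.
# Kernel, window length, and data are assumptions, not the paper's method.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

class OnlineCurrentGP:
    def __init__(self, window=48):
        self.window = window            # number of recent samples retained
        self.times, self.speeds = [], []
        kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
        self.gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

    def update(self, t, speed):
        """Add a new ADCP measurement and refit on the sliding window."""
        self.times.append(t)
        self.speeds.append(speed)
        self.times = self.times[-self.window:]
        self.speeds = self.speeds[-self.window:]
        X = np.array(self.times).reshape(-1, 1)
        self.gp.fit(X, np.array(self.speeds))

    def predict(self, t_future):
        """Return predicted speed and its standard deviation (error model)."""
        mean, std = self.gp.predict(np.array([[t_future]]), return_std=True)
        return mean[0], std[0]

# Example: ingest hourly current-speed samples, then forecast one hour ahead.
model = OnlineCurrentGP()
for hour, v in enumerate([1.2, 1.3, 1.25, 1.4, 1.35]):
    model.update(hour, v)
v_hat, sigma = model.predict(5)
print(f"predicted speed: {v_hat:.2f} m/s +/- {sigma:.2f}")
```

In the setting the abstract describes, the predicted mean would feed the depth optimizer (MPC or DQN), while the standard deviation quantifies the current-speed prediction error whose effect the comparison between the two algorithms evaluates.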
