Abstract

The potential of artificial upwelling to stimulate seaweed growth, and thereby enhance ocean carbon sequestration, has attracted increasing attention in recent years. This interest has led to the development of the first solar-powered, air-lifted artificial upwelling system (AUS) in China. However, effective scheduling of the air injection system and the energy storage system in dynamic marine environments remains a crucial challenge in operating the AUS, since better scheduling can significantly improve system performance. To tackle this challenge, we propose a novel energy management approach that uses deep reinforcement learning (DRL) to determine the optimal operational parameters of the AUS at each time interval. Specifically, we formulate the energy optimization problem as a Markov decision process and solve it by integrating the quantile network from distributional reinforcement learning with a deep dueling network. Through extensive simulations, we evaluate the performance of our algorithm and demonstrate that it outperforms traditional rule-based approaches and other DRL algorithms in improving energy utilization while ensuring the secure and reliable operation of the AUS. Our findings suggest that a DRL-based approach offers a promising way to provide valuable guidance for AUS operation and to enhance the sustainability of seaweed cultivation and ocean carbon sequestration.
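
To make the core of the approach concrete, the sketch below shows one way a dueling architecture can be combined with quantile outputs from distributional reinforcement learning (in the spirit of QR-DQN). It is a minimal illustration only, assuming a PyTorch implementation; the layer widths, number of quantiles, and the state and action dimensions are hypothetical placeholders, not values taken from the paper.

```python
import torch
import torch.nn as nn

class DuelingQuantileNetwork(nn.Module):
    """Dueling network whose value and advantage streams each output
    quantile estimates of the return distribution (QR-DQN style).
    All sizes below are illustrative placeholders."""

    def __init__(self, state_dim, n_actions, n_quantiles=51, hidden=128):
        super().__init__()
        self.n_actions = n_actions
        self.n_quantiles = n_quantiles
        # Shared feature extractor over the observed system state.
        self.feature = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Value stream: one quantile vector for the state value.
        self.value = nn.Linear(hidden, n_quantiles)
        # Advantage stream: one quantile vector per action.
        self.advantage = nn.Linear(hidden, n_actions * n_quantiles)

    def forward(self, state):
        h = self.feature(state)
        v = self.value(h).view(-1, 1, self.n_quantiles)
        a = self.advantage(h).view(-1, self.n_actions, self.n_quantiles)
        # Dueling combination applied quantile-wise:
        # Z(s, a) = V(s) + A(s, a) - mean_a' A(s, a')
        return v + a - a.mean(dim=1, keepdim=True)  # (batch, actions, quantiles)

    def q_values(self, state):
        # Expected Q-values are the mean over the quantile dimension.
        return self.forward(state).mean(dim=-1)
```

In use, the agent would feed the current observation of the AUS (for example, available solar power, battery state of charge, and time of day) through q_values and select the action with the largest expected return, while the full quantile output exposes the spread of possible returns, which is what the distributional component adds on top of a standard dueling DQN.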
