Abstract
This work proposes a deep reinforcement learning (DRL) control framework for greenhouse climate control. The framework uses a neural network to approximate the state-action value function and trains it with a Q-learning based procedure for experience collection and parameter updates. Continuous action spaces are handled by extracting the optimal action for a given greenhouse state directly from the neural network approximator via stochastic gradient ascent; analytical gradients of the state-action value estimate are not required, since they can be computed efficiently through backpropagation. We evaluate the algorithm on a simulation of a semi-closed greenhouse located in New York City. The computational results indicate that the proposed Q-learning based DRL framework yields higher cumulative returns and consumes 61% less energy than the deep deterministic policy gradient (DDPG) method.
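To make the action-extraction step concrete, the sketch below shows how a continuous action could be obtained from a learned Q-network by gradient ascent on the action input, with the required gradients supplied by backpropagation. This is a hypothetical PyTorch illustration of the idea described in the abstract, not the authors' implementation; the network sizes, step count, learning rate, and normalized action bounds are assumptions made for the example.

```python
# Minimal sketch (assumed details, not the paper's code): extract a continuous
# action by gradient ascent on Q(s, a) with respect to the action input.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Approximates the state-action value Q(s, a)."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

def extract_action(q_net: QNetwork, state: torch.Tensor, action_dim: int,
                   steps: int = 50, lr: float = 0.05) -> torch.Tensor:
    """Search for argmax_a Q(s, a) by gradient ascent; gradients with respect
    to the action come from backpropagation, so no analytical form is needed."""
    action = torch.zeros(action_dim, requires_grad=True)  # initial guess
    optimizer = torch.optim.SGD([action], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = -q_net(state, action)          # ascend Q by descending -Q
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            action.clamp_(-1.0, 1.0)          # assumed normalized actuator bounds
    return action.detach()

# Usage with placeholder dimensions for the greenhouse state and actuators
q_net = QNetwork(state_dim=10, action_dim=3)
state = torch.zeros(10)
best_action = extract_action(q_net, state, action_dim=3)
```

In this sketch the trained Q-network is held fixed and only the action vector is optimized, which is how a value-based method can act in a continuous action space without a separate policy network.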