Abstract

In recent years, real-time control methods based on deep reinforcement learning (DRL) have been developed for mitigating urban combined sewer overflow (CSO) and flooding, and they offer advantages over traditional methods in the context of urban drainage systems (UDSs). Because current studies mainly focus on analyzing the feasibility of DRL methods and comparing them with traditional methods, there is still a need to optimize the design and cost of DRL methods. In this study, state selection and cost estimation are employed to analyze the influence of different input states on the performance of DRL methods and to provide suggestions for practical applications. A real-world combined UDS is used as a case study to develop DRL models with different input states, whose control performance and data-monitoring costs are then compared. The results show that training a DRL agent is difficult when the input state contains information from fewer nodes or only water levels. Using information from both upstream and downstream nodes as input improves the control performance of DRL, and upstream node information is more effective as an input state than downstream node information. Using flow as input is more likely to yield better control than using water level, while using both flow and water level does not significantly improve control performance further. Because flow monitoring is more expensive than water-level monitoring, the number of monitoring nodes and the choice between flow and water-level sensors need to be balanced on a cost-effectiveness basis.
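To make the state-selection idea concrete, the following is a minimal Python sketch of how observation vectors with different node sets and measured variables (flow and/or water level) might be assembled for a DRL agent. The `DrainageModelStub` class, the `build_state` function, and the node IDs are illustrative assumptions, not the study's actual code, monitoring layout, or hydraulic model interface; in practice the readings would come from a calibrated UDS model or field sensors.

```python
import numpy as np

# Hypothetical stand-in for a hydraulic model or sensor network
# (e.g., a SWMM wrapper). The readings here are random placeholders.
class DrainageModelStub:
    def __init__(self, node_ids, seed=0):
        self.node_ids = node_ids
        self.rng = np.random.default_rng(seed)

    def flow(self, node_id):
        # Placeholder flow reading at a monitored node (m^3/s).
        return float(self.rng.uniform(0.0, 5.0))

    def water_level(self, node_id):
        # Placeholder water-level reading at a monitored node (m).
        return float(self.rng.uniform(0.0, 3.0))


def build_state(model, upstream, downstream, use_flow=True, use_level=False):
    """Assemble a DRL observation vector from a chosen node set and
    chosen measured variables (flow and/or water level)."""
    features = []
    for node in list(upstream) + list(downstream):
        if use_flow:
            features.append(model.flow(node))
        if use_level:
            features.append(model.water_level(node))
    return np.asarray(features, dtype=np.float32)


# Example: three of the state configurations the study compares.
model = DrainageModelStub(["U1", "U2", "D1", "D2"])
s_up_flow   = build_state(model, ["U1", "U2"], [], use_flow=True)
s_both_flow = build_state(model, ["U1", "U2"], ["D1", "D2"], use_flow=True)
s_both_all  = build_state(model, ["U1", "U2"], ["D1", "D2"],
                          use_flow=True, use_level=True)
print(s_up_flow.shape, s_both_flow.shape, s_both_all.shape)  # (2,) (4,) (8,)
```

Each configuration trades observation richness against monitoring cost: adding downstream nodes or a second variable lengthens the state vector, which may improve control but requires additional (and, for flow, more expensive) sensors.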
