Abstract

Reinforcement learning (RL) has been used for real-time control of urban drainage systems (UDSs) to mitigate flooding, marking a milestone in urban water management. However, RL only guarantees an optimized control policy; it does not keep the control trajectory safe and trustworthy. Unacceptable risk therefore remains when a real-world control process is handed over to an RL agent. Although safe learning is effective in enhancing RL safety, it cannot be applied directly because a quantitative framework for RL safety in the UDS context is lacking. This study carries out three tasks to investigate and improve the safety of RL in UDSs. First, a metric framework for RL safety in the UDS context is established through a mathematical description. This framework is then plugged into safe learning methods to improve RL safety in UDSs. Finally, a systematic uncertainty analysis is employed to evaluate the robustness of RL. The results of the case study indicate that (i) all the RL agents show promising performance in flooding mitigation; (ii) safe learning helps RL agents achieve a safer control process, with a lower average water level and less frequent orifice operation; and (iii) the robustness of RL in UDSs is influenced by rainfall volume, the degree of randomness, and the type of RL agent.
