Abstract

For naturally ventilated apartments equipped with air cleaners, it is essential to develop a controller that simultaneously operates the window and the air cleaner, mitigating indoor PM2.5 (particulate matter with aerodynamic diameter less than 2.5 μm) pollution while limiting the air cleaner's energy consumption. This investigation first employed deep reinforcement learning to train a smart controller that minimizes the total economic loss due to PM2.5-related health risks and air cleaner energy consumption. The controller was trained offline in a virtual apartment constructed on the basis of a particle dynamics model with typical building parameters. The inputs required by the controller were the real-time indoor and outdoor PM2.5 concentrations, which could be measured by low-cost sensors. To test the trained deep Q-network (DQN) controller, a series of experiments was conducted in two laboratory chambers. To assess controller performance, both the indoor PM2.5 concentrations and the operating time of the air cleaner were compared between the trained DQN controller and different benchmark controllers at various outdoor PM2.5 levels under different chamber conditions. The trained DQN controller outperformed the benchmark controllers, reducing the total economic loss due to indoor PM2.5-related health risks and air cleaner energy consumption by 2.4%–43.7% across all 18 cases. Although the DQN controller was trained offline in a virtual apartment with typical building parameters, its performance remained robust in the chamber experiments even when the parameters differed substantially from the typical values.
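The training setup described above can be illustrated with a minimal sketch of the virtual-apartment environment: a well-mixed mass-balance model of indoor PM2.5, where the controller's actions (window open/closed, cleaner on/off) change the removal and supply terms, and the reward is the negative total economic loss (health cost of exposure plus cleaner energy cost). All parameter values below are illustrative assumptions, not the paper's; the paper's actual model, reward coefficients, and DQN architecture are not specified in the abstract.

```python
import math

# Illustrative (assumed) parameters -- NOT the paper's values
DT_H = 0.1            # time step [h]
VOLUME = 125.0        # apartment volume [m^3]
ACH_CLOSED = 0.3      # air changes per hour, window closed
ACH_OPEN = 3.0        # air changes per hour, window open
PENETRATION = 0.8     # particle penetration factor through the envelope
DEPOSITION = 0.2      # particle deposition rate [1/h]
CADR = 400.0          # clean air delivery rate of the cleaner [m^3/h]
HEALTH_COST = 0.01    # economic loss per (ug/m^3 * h) of indoor exposure
ENERGY_COST = 0.05    # cleaner operating cost per hour

def step(c_in, c_out, window_open, cleaner_on):
    """Advance indoor PM2.5 one time step; return (new_c_in, reward).

    Mass balance: dC/dt = P * ach * C_out - (ach + k_dep + CADR/V) * C,
    solved exactly over one step assuming C_out is constant.
    """
    ach = ACH_OPEN if window_open else ACH_CLOSED
    removal = ach + DEPOSITION + (CADR / VOLUME if cleaner_on else 0.0)
    source = PENETRATION * ach * c_out
    c_eq = source / removal                      # steady-state concentration
    new_c = c_eq + (c_in - c_eq) * math.exp(-removal * DT_H)
    # Reward = negative economic loss (exposure cost + energy cost)
    loss = HEALTH_COST * new_c * DT_H + (ENERGY_COST * DT_H if cleaner_on else 0.0)
    return new_c, -loss
```

A DQN agent would observe the state (indoor and outdoor PM2.5), pick one of the four window/cleaner action combinations, and learn long-horizon action values from the reward above; this sketch covers only the environment side of that loop.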
