Abstract

The Energy Router (ER), a compact intelligent power electronic device, has been proposed as the core of the Internet of Energy (IoE), maximizing energy efficiency, minimizing losses and costs, and addressing growing electricity demand. However, optimizing electricity routing in the residential sector has not been well investigated. Moreover, the complex modeling of energy components, combined with the uncertainty of the environment, renders conventional methods ineffective for these problems. Consequently, this research proposes a novel algorithm, Approximate Reasoning Reward-based Adaptable Deep Double Q-Learning (A2R-ADDQL), designed specifically to optimize electricity routing in residential units. As a result, both overestimation and underestimation biases are reduced compared with other deep Q-learning-based algorithms. Moreover, the sample complexity of the model is decreased by using a fuzzy approximate reasoning reward function. Ultimately, the proposed algorithm is assessed on a real-world dataset and evaluated against several benchmarks. The results indicate that the proposed model is unbiased and converges faster than the other analyzed techniques. Additionally, monthly average cost and power loss are lowered by 24.9% and 29.1%, respectively, relative to the other techniques. Finally, the proposed algorithm reduces greenhouse gas emissions by 3.91 kg per month.
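To illustrate the two ingredients the abstract names, the sketch below combines tabular double Q-learning (two value tables, one used to select the greedy action and the other to evaluate it, which tempers overestimation bias) with a reward computed by fuzzy approximate reasoning over triangular membership functions. This is a minimal illustration under assumed settings, not the paper's A2R-ADDQL: the toy environment, the fuzzy set boundaries, the reward levels, and all hyperparameters are hypothetical placeholders.

```python
# Illustrative sketch only: tabular double Q-learning with a fuzzy-shaped reward.
# The environment, fuzzy sets, and hyperparameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_reward(cost):
    """Sugeno-style defuzzification: membership-weighted reward levels (assumed)."""
    memberships = np.array([
        tri(cost, -1.0, 0.0, 0.5),   # "low cost"    -> reward +1
        tri(cost,  0.0, 0.5, 1.0),   # "medium cost" -> reward  0
        tri(cost,  0.5, 1.0, 2.0),   # "high cost"   -> reward -1
    ])
    levels = np.array([1.0, 0.0, -1.0])
    return float(memberships @ levels / (memberships.sum() + 1e-9))

n_states, n_actions = 10, 3
Q_a = np.zeros((n_states, n_actions))
Q_b = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, action):
    """Toy transition: cost depends on state/action mismatch; purely illustrative."""
    cost = abs(state / n_states - action / n_actions) + rng.normal(0, 0.05)
    next_state = int(rng.integers(n_states))
    return next_state, fuzzy_reward(cost)

state = int(rng.integers(n_states))
for _ in range(5000):
    # Epsilon-greedy action selection over the sum of both tables.
    q_sum = Q_a[state] + Q_b[state]
    action = int(rng.integers(n_actions)) if rng.random() < eps else int(q_sum.argmax())
    next_state, reward = step(state, action)
    # Double Q-learning update: one table selects the greedy next action,
    # the other evaluates it, reducing overestimation bias.
    if rng.random() < 0.5:
        best = int(Q_a[next_state].argmax())
        Q_a[state, action] += alpha * (reward + gamma * Q_b[next_state, best] - Q_a[state, action])
    else:
        best = int(Q_b[next_state].argmax())
        Q_b[state, action] += alpha * (reward + gamma * Q_a[next_state, best] - Q_b[state, action])
    state = next_state
```

The graded fuzzy reward gives the learner a smoother signal than a hard cost threshold would, which is one plausible way such shaping can reduce sample complexity; the paper's actual reward design and deep network architecture are described in the full text.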
