Abstract

Non-technical losses (NTL) constitute a major issue in many countries. NTL detection can be framed as a bad data detection problem, so classical approaches such as the weighted least squares method and statistical tests can be used to detect and identify bad data arising from NTL. These classical approaches are suitable when the topology of the network and its parameters are known. While this assumption is widely accepted in transmission grids, it does not hold in distribution grids, where grid reconfiguration is common and parameters depend significantly on ambient conditions. In this paper, we leverage recent advances in mathematical and computational tools to detect NTL in distribution grids, so that NTL detection can be implemented in an automated system that requires no human interaction. We use off-the-shelf machine learning algorithms for this task. In particular, we introduce a new architecture that combines different types of deep neural networks, namely convolutional and recurrent neural networks. A thorough set of simulations on a realistic dataset is performed, and the results are compared with other model-free machine learning approaches, namely support vector machines, random forests, and gradient boosted trees.
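As an illustration of the model-free baselines named above, the sketch below trains and cross-validates a support vector machine, a random forest, and gradient boosted trees on synthetic consumption profiles. The data generation (gamma-distributed hourly readings, with tampered customers under-reporting daytime consumption) is an assumption made for the example only and is not the paper's dataset or method.

```python
# Hedged sketch: comparing the model-free baselines mentioned in the abstract
# (SVM, random forest, gradient boosted trees) on synthetic consumption data.
# The feature layout and tampering pattern are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, t = 200, 24  # assumed: 200 customers per class, 24 hourly readings each
X_normal = rng.gamma(2.0, 1.0, size=(n, t))
X_theft = X_normal.copy()
X_theft[:, 8:18] *= 0.3  # assumed tampering: under-reporting during the day
X = np.vstack([X_normal, X_theft])
y = np.array([0] * n + [1] * n)  # 0 = honest, 1 = NTL

for name, clf in [("SVM", SVC()),
                  ("Random forest", RandomForestClassifier(random_state=0)),
                  ("Gradient boosting", GradientBoostingClassifier(random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy {acc:.2f}")
```

In practice the deep architecture would replace the hand-crafted tampering assumption with features learned directly from the metering time series; the baselines above serve as the reference points for that comparison.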
