Abstract

In load prediction, point-based forecasting methods have been widely applied. However, the uncertainties that arise in load prediction pose significant challenges for such methods. This has driven the development of new methods, among which interval prediction is one of the most effective. In this study, a deep belief network-based lower–upper bound estimation (LUBE) approach is proposed, and a genetic algorithm is applied, in place of a simulated annealing algorithm, to reinforce the search ability of the LUBE method. The approach is applied to short-term load prediction on realistic electricity load data. To demonstrate the effectiveness and efficiency of the proposed method, it is compared with three state-of-the-art methods. Experimental results show that the proposed approach significantly improves prediction accuracy.
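
For orientation, the following is a minimal, hypothetical Python sketch of how a genetic algorithm can be wrapped around a lower–upper bound estimator. The tiny two-output linear "network", the coverage-versus-width fitness function, and all GA settings are illustrative assumptions made for this sketch; they are not the paper's deep belief network or its exact objective.

```python
# Hypothetical sketch: a genetic algorithm searching the parameters of a
# two-output (lower/upper bound) model for interval prediction.
# The fitness is an illustrative coverage-versus-width trade-off.
import numpy as np

rng = np.random.default_rng(0)

def predict_bounds(weights, X):
    """Tiny linear 'network': two outputs act as lower and upper bounds."""
    W = weights.reshape(X.shape[1] + 1, 2)           # +1 row for the bias
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    out = Xb @ W
    return out.min(axis=1), out.max(axis=1)          # keep lower <= upper

def fitness(weights, X, y, target_coverage=0.9):
    lower, upper = predict_bounds(weights, X)
    coverage = np.mean((y >= lower) & (y <= upper))  # empirical coverage
    width = np.mean(upper - lower) / (y.max() - y.min() + 1e-12)
    penalty = max(0.0, target_coverage - coverage)   # punish under-coverage
    return -(width + 10.0 * penalty)                 # higher is better

def genetic_search(X, y, pop_size=40, generations=100, sigma=0.3):
    dim = (X.shape[1] + 1) * 2
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]    # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, dim)                         # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(scale=sigma, size=dim)         # mutation
            children.append(child)
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind, X, y) for ind in pop])]

# Toy usage on synthetic "load" data
X = rng.uniform(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=200)
w = genetic_search(X, y)
lo, hi = predict_bounds(w, X)
print(f"coverage={np.mean((y >= lo) & (y <= hi)):.2f}, mean width={np.mean(hi - lo):.3f}")
```
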

Highlights

  • Load prediction plays an important role in power system planning and in building reliable power systems

  • The proposed deep belief network (DBN)-based lower–upper bound estimation (LUBE) method was compared with three best-in-class prediction models, i.e., the Elman model [39], the nonlinear autoregressive exogenous (NARX)

Summary

Introduction

Load prediction plays an important role in the planning of power systems, in building reliable power systems, and so on. A number of studies have been proposed for short-term load prediction (STLP). These methods can be loosely categorized into point prediction and interval prediction. The upper and lower bounds produced by interval prediction cover the target values with high probability, and they also provide a coverage probability as an indication, which brings more quantitative information than point prediction. The delta method adopts a nonlinear regression technique to enhance the generalization performance of neural network (NN) models [15]; it linearizes the neural network model through a set of parameters obtained by minimizing the sum of squared error cost functions (a simplified illustration follows below). Bayesian techniques are also used to train neural networks, and they allow the predicted value to carry a certain error range [17]. The low empirical coverage probability is the biggest drawback of this approach.
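
To make the delta-method idea concrete, here is a hedged Python sketch. Once the model is linearized, prediction intervals follow the classical nonlinear-regression formula; a plain least-squares model is used here purely for brevity, so the design matrix stands in for the Jacobian of a linearized neural network. The data and variable names are illustrative assumptions, not taken from the paper.

```python
# Hypothetical illustration of the delta-method idea:
# after linearization, a prediction interval is
#   y_hat +/- t_{alpha/2, n-p} * sqrt(s^2 * (1 + x^T (J^T J)^{-1} x)),
# where J is the Jacobian of the model w.r.t. its parameters.
# A plain linear model is used, so J is simply the design matrix.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy data standing in for electricity load against three explanatory inputs.
X = np.hstack([rng.uniform(size=(100, 3)), np.ones((100, 1))])   # bias column
y = X @ np.array([1.5, -0.8, 0.3, 10.0]) + rng.normal(scale=0.5, size=100)

# Least-squares fit, i.e., minimizing the sum of squared errors.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta
n, p = X.shape
s2 = residuals @ residuals / (n - p)                 # residual variance
JtJ_inv = np.linalg.inv(X.T @ X)

def delta_interval(x_new, alpha=0.05):
    """Prediction interval for one new input row (delta-method style)."""
    y_hat = x_new @ beta
    half = stats.t.ppf(1 - alpha / 2, n - p) * np.sqrt(s2 * (1 + x_new @ JtJ_inv @ x_new))
    return y_hat - half, y_hat + half

lower, upper = delta_interval(X[0])
print(f"observed={y[0]:.2f}, interval=({lower:.2f}, {upper:.2f})")
```
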

Method
Evaluation Metrics of Interval Prediction
LUBE Approach
Single-Objective
Pre-Training Process
Fine-Tuning Process
Model Implementation
Preprocessing of Data
January
Parameter Settings
Results Analysis
Conclusions