Abstract

Photovoltaic (PV) modules convert renewable and sustainable solar energy into electricity. However, the uncertainty of PV power production poses challenges for grid operation. Forecasting is therefore an essential technique for the management and scheduling of PV power plants. In this paper, a robust multilayer perceptron (MLP) neural network was developed for day-ahead forecasting of hourly PV power. A generic MLP is usually trained by minimizing the mean squared loss, which is sensitive to a few particularly large errors and can therefore yield a poor estimator. To tackle this problem, the pseudo-Huber loss function, which combines the best properties of the squared loss and the absolute loss, was adopted in this paper. The effectiveness and efficiency of the proposed method were verified by benchmarking against a generic MLP network with real PV data. Numerical experiments showed that the proposed method outperformed the generic MLP network in terms of root mean squared error (RMSE) and mean absolute error (MAE).
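
For reference, the pseudo-Huber loss discussed above is commonly written as $L_\delta(e) = \delta^2\left(\sqrt{1 + (e/\delta)^2} - 1\right)$: it is approximately quadratic for small residuals $e$ and grows only linearly for large ones. The short NumPy sketch below contrasts it with the squared loss on a residual vector containing one outlier; the residual values and the transition parameter $\delta = 1.0$ are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pseudo_huber(residuals, delta=1.0):
    """Pseudo-Huber loss: quadratic near zero, asymptotically linear for large residuals."""
    return delta**2 * (np.sqrt(1.0 + (residuals / delta)**2) - 1.0)

def squared(residuals):
    """Ordinary squared loss, sensitive to a few large residuals."""
    return residuals**2

# Residuals with one large outlier (illustrative values only).
e = np.array([0.1, -0.2, 0.05, 3.0])
print("mean squared loss     :", squared(e).mean())       # dominated by the outlier
print("mean pseudo-Huber loss:", pseudo_huber(e).mean())   # outlier contributes roughly linearly
```

The bounded growth for large residuals is what makes the training objective less sensitive to occasional large forecast errors than the mean squared loss.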

Highlights

  • Solar energy is considered to be one of the most promising renewable and sustainable energy resources

  • The objective function for training the multilayer perceptron (MLP), based on the pseudo-Huber loss, is expressed in Equation (4) as the pseudo-Huber loss averaged over the N training samples and Q outputs: $J = \frac{1}{NQ}\sum_{n=1}^{N}\sum_{q=1}^{Q}\delta^{2}\left(\sqrt{1+\left(\frac{y_{n,q}-\hat{y}_{n,q}}{\delta}\right)^{2}}-1\right)$ (a numerical sketch of this criterion follows this list)

  • Hourly PV power production recorded between 1 January 2012 and 31 December 2017 at a PV plant installed at the Andre Agassi Preparatory Academy Building B (36.19° N, 115.16° W, elevation of 620 m) in the USA was used to verify the efficacy of the proposed MLP for day-ahead hourly PV power forecasting
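
As referenced in the objective-function highlight above, the following sketch evaluates a training criterion of that form: the pseudo-Huber loss averaged over $N$ samples and $Q$ hourly outputs. The array shapes, the synthetic data, and $\delta = 1.0$ are assumptions for illustration; the paper's Equation (4) may use a different normalization or parameter value.

```python
import numpy as np

def pseudo_huber_objective(y_true, y_pred, delta=1.0):
    """Pseudo-Huber loss averaged over all N samples and Q outputs
    (assumed form of the paper's Equation (4))."""
    e = y_true - y_pred
    loss = delta**2 * (np.sqrt(1.0 + (e / delta)**2) - 1.0)
    return loss.mean()   # equivalent to (1 / (N * Q)) * sum over all entries

# Illustrative day-ahead setting: N = 32 days, Q = 24 hourly forecasts per day.
rng = np.random.default_rng(0)
y_true = rng.uniform(0.0, 5.0, size=(32, 24))            # measured hourly PV power (arbitrary units)
y_pred = y_true + rng.normal(0.0, 0.3, size=(32, 24))    # hypothetical day-ahead forecasts
print("objective J:", pseudo_huber_objective(y_true, y_pred))
```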


Summary

Introduction

Solar energy is considered to be one of the most promising renewable and sustainable energy resources. In the early stages of PV power forecasting, statistical models such as the autoregressive moving average (ARMA) [9] and its variants [10] were frequently used. The MLP maps the input to the output through a hidden layer according to Equations (1) and (2):

$$H = f_h\left(\omega_h X^{T} + b_h\right) \quad (1)$$

$$Y = f_y\left(\omega_y H^{T} + b_y\right) \quad (2)$$

where $X = (x_1, x_2, \dots, x_M)$ and $Y = (y_1, y_2, \dots, y_Q)$ denote the M-dimensional and Q-dimensional model input and output, respectively; ω and b represent the weights and biases of the network, respectively; f denotes the activation function; and the subscripts h and y stand for the hidden layer and the output layer, respectively. The pseudo-Huber loss was applied to the training of the MLP
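
To make Equations (1) and (2) concrete, the sketch below implements the single-hidden-layer forward pass in NumPy, treating the input and hidden vectors as row vectors so that $X^T$ and $H^T$ are columns. The tanh hidden activation, identity output activation, and layer sizes are assumptions for illustration; the text above does not fix them.

```python
import numpy as np

def mlp_forward(x, w_h, b_h, w_y, b_y):
    """Single-hidden-layer MLP forward pass following Equations (1) and (2)."""
    H = np.tanh(w_h @ x.T + b_h).T      # Eq. (1): H = f_h(w_h x^T + b_h), f_h = tanh (assumed)
    Y = (w_y @ H.T + b_y).T             # Eq. (2): Y = f_y(w_y H^T + b_y), f_y = identity (assumed)
    return Y                            # shape (1, Q)

M, Q, n_hidden = 8, 24, 16              # illustrative dimensions, not taken from the paper
rng = np.random.default_rng(1)
x   = rng.normal(size=(1, M))           # one M-dimensional input sample
w_h = rng.normal(size=(n_hidden, M)); b_h = np.zeros((n_hidden, 1))
w_y = rng.normal(size=(Q, n_hidden)); b_y = np.zeros((Q, 1))
print(mlp_forward(x, w_h, b_h, w_y, b_y).shape)   # (1, 24)
```

In a complete training loop, the network weights would be fitted by minimizing the pseudo-Huber objective over the training set instead of the mean squared loss.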

Pseudo-Huber Loss
Data Description
Numerical Results and Analysis
Method