Abstract

A Multi-Layer Perceptron (MLP) defines a family of artificial neural networks often used in TS modeling and forecasting. Because of its "black box" aspect, many researchers are reluctant to use it. Moreover, the optimization phase (often based on an exhaustive approach in which "all" configurations are tested) and the learning phase (often based on the Levenberg-Marquardt algorithm, LMA) are weaknesses of this tool: exhaustiveness on the one hand, local minima on the other. These two tasks must be repeated for each new problem studied, making the process long, laborious and not systematically robust. In this paper a pruning process is proposed. During the training phase, this method carries out an input selection by activating (or not) inter-node connections in order to verify whether forecasting is improved. We propose to use the popular damped least-squares method iteratively to activate inputs and neurons. A first pass is applied to 10% of the learning sample to determine which weights are significantly different from 0 and to delete the others. Then a classical batch process based on the LMA is applied to the new MLP. The validation is done using 25 measured meteorological TS and by cross-comparing the prediction results of the classical LMA and of the 2-stage LMA.
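
To make the 2-stage procedure concrete, the sketch below outlines it in Python. The Levenberg-Marquardt trainer train_lm(X, y, mask) is a hypothetical placeholder (any LM fitter returning a flat weight vector and honouring a boolean mask of active connections would do), and the magnitude test standing in for "significantly different from 0" is an assumption of this sketch, not the paper's stated criterion.

    import numpy as np

    def two_stage_lm_train(X, y, train_lm, subset_frac=0.10, z_thresh=1.0):
        # Stage 1: quick LM pass on ~10% of the learning sample, all weights free.
        rng = np.random.default_rng(0)
        idx = rng.choice(len(X), size=max(1, int(subset_frac * len(X))), replace=False)
        w1 = train_lm(X[idx], y[idx], mask=None)

        # Keep only weights "significantly different from 0". A magnitude test
        # against the empirical weight spread stands in for the significance
        # criterion (an assumption; the exact test is not given in this excerpt).
        mask = np.abs(w1) > z_thresh * np.std(w1)

        # Stage 2: classical batch LM on the full sample with the pruned topology.
        w2 = train_lm(X, y, mask=mask)
        return w2, mask

Pruned connections are simply frozen at 0 through the mask, so the second pass optimizes a smaller effective network at the usual LMA cost per iteration.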

Highlights

  • The primary goal of time series (TS) analysis is forecasting, i.e. using the past to predict the future [1]. This formalism is used in many scientific fields such as econometrics, seismology and meteorology.

  • Seven runs are operated, so 175 manipulations (25 time series × 7 runs) are performed with the pruning methodology described above and with the standard approach.

  • The points plotted in the upper zone correspond to cases where the pruned MLP (pMLP) outperforms the standard Multi-Layer Perceptron (MLP).


Summary

Introduction

The primary goal of time series (TS) analysis is forecasting, i.e. using the past to predict the future [1]. This formalism is used in many scientific fields such as econometrics, seismology and meteorology. A Multi-Layer Perceptron (MLP) defines a family of functions often used in TS modeling [1]. In this model, neurons are grouped in layers and only forward connections exist. A typical MLP consists of input, hidden and output layers, including neurons, weights and transfer functions. Each neuron (noted i) transforms the weighted sum (weights w_ij, bias b_i) of its inputs (x_j) into an output through a transfer function f: y_i = f(∑_j w_ij x_j + b_i).
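
As a minimal illustration of this equation, the NumPy sketch below computes a single neuron output and a one-hidden-layer forward pass; the tanh transfer function, the layer shapes and the linear output layer are assumptions of the sketch, common in TS forecasting but not specified in this excerpt.

    import numpy as np

    def neuron_output(x, w_i, b_i, f=np.tanh):
        # y_i = f(sum_j w_ij * x_j + b_i): weighted sum of the inputs plus
        # bias, passed through the transfer function f.
        return f(np.dot(w_i, x) + b_i)

    def mlp_forward(x, W_h, b_h, W_o, b_o):
        # One hidden layer, forward connections only, as described above.
        h = np.tanh(W_h @ x + b_h)   # hidden-layer activations
        return W_o @ h + b_o         # linear output neuron(s)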

