Abstract

In this contribution, three different artificial neural networks are tested on the CATS prediction benchmark. The results are compared and evaluated. Furthermore, these artificial neural networks are tested in model predictive control of a time-variant system. The aim of this paper is to present and compare artificial neural networks as an interesting way to model and predict nonlinear systems, even those with time-variant parameters. The key features of this paper are its emphasis on the computational costs of the selected predictors and its use of an adaptive linear network, which offers short learning times and remarkably low prediction error.

INTRODUCTION

The increasing demands on quality, reliability, and economic profit lead to the use of new modeling and control methods in the process industry. In the past few decades, predictive control techniques have become very popular. One of the most widely used approaches is the Model Predictive Control (MPC) method (Camacho and Bordons 1995).

An appropriate predictive model is a key question in nonlinear model predictive control. Predictive models can be divided into two main groups (Verdunen and Jong 2003): white box models and black box models. White box modeling is based on prior knowledge of the mathematical description of the basic physical laws of the controlled process. White box models are excellent for process modeling and product development; the model constants have a physical meaning and are not dependent on the process design. The main disadvantages of white box models are the development time and the higher complexity. Conversely, black box models, such as artificial neural network (ANN) and fuzzy logic models, are data-driven. They provide a general method for describing process dynamics from input-output data. First and foremost, their learning ability makes artificial neural networks a versatile, user-friendly, and powerful tool for many practical applications (Hussain 1999).

Many predictive control techniques based on MPC that use an artificial neural network as a predictor are built on multilayer feed-forward neural networks (Hagan et al. 2002; Kanjilal 1995). Although multilayer feed-forward neural networks (MFFNNs) have many advantages, such as simple design and scalability, they also have drawbacks, such as long training times and the need to choose an appropriate learning stop time (over-learning versus early stopping). Nevertheless, quite a number of ANN types are suitable for modeling and prediction (Liu 2001; Meszaros et al. 1999; Chu et al. 2003), and the features of these ANNs exceed the abilities of the MFFNN in many cases. One of these ANNs is ADALINE (ADAptive LInear NEuron). What is more, ADALINE has one special feature: adaptivity. Owing to its simple structure, it offers an interesting way to design an adaptive neural predictor with reasonable computational demands, as the sketch at the end of this section illustrates.

This paper is organized as follows: first, multilayer feed-forward neural networks and adaptive linear networks are briefly introduced; then the methodology of the simulations is explained; after that, the results are presented; and the paper is concluded with final remarks.
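To make the computational argument concrete, the following is a minimal sketch of an on-line ADALINE predictor with the LMS (Widrow-Hoff) update. It is an illustration only: the function name, tap count, and learning rate are assumptions, not taken from this paper. The point is that each adaptation step costs only O(n) operations, which is where the short learning times come from.

```python
import numpy as np

def adaline_predict(signal, n=4, mu=0.01):
    """One-step-ahead prediction of `signal` with an n-tap ADALINE.

    Hypothetical minimal implementation: the weights adapt on-line with
    the LMS (Widrow-Hoff) rule; `mu` is an illustrative learning rate.
    """
    w = np.zeros(n + 1)                        # weights; last entry is the bias
    predictions = np.zeros(len(signal))
    for k in range(n, len(signal)):
        x = np.append(signal[k - n:k], 1.0)    # n past samples plus bias input
        y_hat = w @ x                          # linear neuron output
        e = signal[k] - y_hat                  # prediction error
        w += mu * e * x                        # LMS update: O(n) per sample
        predictions[k] = y_hat
    return predictions, w
```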
MULTILAYER FEED-FORWARD NEURAL NETWORKS

Multilayer feed-forward neural networks were derived by generalization from Rosenblatt's perceptron, and thus they are often called multilayer perceptrons (MLPs). This type of artificial neural network uses supervised training. One of the best-known methods of supervised training is the backpropagation algorithm; hence these ANNs are sometimes also called backpropagation networks. In an MFFNN, the signals flow between the neurons only in the forward direction, i.e., towards the output. Neurons in an MFFNN are organized in layers, and the neurons of a given layer can receive inputs from any neurons of the preceding layer. The prediction ability of an ANN is determined by its capability to model the given process. By applying the Kolmogorov theorem, it was proved that a two-layer MFFNN (one hidden layer) is sufficient for general function approximation if non-polynomial transfer functions are used and the hidden layer has enough neurons (Leshno et al. 1993).

Figure 1: Simplified Scheme of a Two-layer MFFNN

The two-layer MFFNN, which contains one output layer and one hidden layer, is depicted in Figure 1 (this structure is implemented in this paper). This MFFNN can be described by two equations:
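In one standard notation (the symbols here are assumptions, not necessarily those of the original paper: $\mathbf{x}$ is the input vector, $\mathbf{W}_1, \mathbf{b}_1$ and $\mathbf{W}_2, \mathbf{b}_2$ are the weights and biases of the hidden and output layers, and $f_1$, $f_2$ are their transfer functions), the two equations take the form

$$\mathbf{y}_1 = f_1(\mathbf{W}_1 \mathbf{x} + \mathbf{b}_1)$$

$$\mathbf{y} = f_2(\mathbf{W}_2 \mathbf{y}_1 + \mathbf{b}_2)$$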
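A direct transcription of these two equations into code makes the structure explicit. The sketch below assumes a tanh transfer function in the hidden layer (non-polynomial, as the approximation result above requires) and a linear output layer; these choices, and all names and dimensions, are illustrative rather than taken from the paper.

```python
import numpy as np

def mffnn_forward(x, W1, b1, W2, b2):
    """Forward pass of a two-layer MFFNN (one hidden, one output layer)."""
    y1 = np.tanh(W1 @ x + b1)   # hidden layer: first equation
    return W2 @ y1 + b2         # output layer: second equation

# Usage example: 3 inputs, 5 hidden neurons, 1 output
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((5, 3)), np.zeros(5)
W2, b2 = rng.standard_normal((1, 5)), np.zeros(1)
y = mffnn_forward(np.array([0.1, -0.2, 0.3]), W1, b1, W2, b2)
```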
