1. Introduction

A common characteristic of all stock exchange indicators is uncertainty about their short- and long-term future states. This feature is undesirable for the investor, but it is unavoidable whenever a stock exchange indicator is selected as an investment tool. One of the most useful contributions a researcher can make is to reduce this uncertainty. Stock exchange prediction, or forecasting, is a central problem in this effort, as has also been addressed in other works (Helstrom T. & Holmstrom K., 1998; Tsibouris G. & Zeidenberg M., 1996; White H., 1993).

Time series forecasting, or time series prediction, takes an existing series of data $x_{t-n}, \dots, x_{t-2}, x_{t-1}, x_t$ and forecasts the future values $x_{t+1}, x_{t+2}, \dots$. The goal is to observe or model the existing data series so that future, unknown data values can be forecasted accurately. Examples of data series include financial data series (stocks, indices, rates, etc.), physically observed data series (sunspots, weather, etc.), and mathematical data series (the Fibonacci sequence, integrals of differential equations, etc.).

A neural network (Tsibouris G. & Zeidenberg M., 1996) is a computational model loosely based on the neuron cell structure of the biological nervous system. Given a training set of data, a neural network can learn the data with a learning algorithm; in this research the most common algorithm, back propagation, is used. Through back propagation, the neural network forms a mapping between inputs and desired outputs from the training set by altering the weighted connections within the network.

The origin of neural networks dates back to the 1940s. McCulloch and Pitts (1943) and Hebb (1949) studied networks of simple computing devices that could model neurological activity and learning within such networks, respectively. Later, the work of Rosenblatt (1962) focused on the computational ability of perceptrons, or single-layer feed-forward networks. Proofs showing that perceptrons, trained with the Perceptron Rule on linearly separable pattern-class data, could correctly separate the classes generated excitement among researchers and practitioners. This excitement waned with the discouraging analysis of perceptrons presented by Minsky and Papert (1969), which pointed out that perceptrons could not learn the class of linearly inseparable functions. It also noted that this limitation could be overcome by networks with more than one layer; however, no effective training algorithm for multi-layer networks was available at the time. Rumelhart, Hinton, and Williams (1986) revived interest in neural networks by introducing the generalized delta rule for learning by back propagation, which is today the most commonly used training algorithm for multi-layer networks.

2. Athens Stock Exchange Indicator Time Series

The Athens Stock Exchange Indicator is treated as a signal $x = x(t)$, as shown in Figure 1. It covers data from the period 1998 to 2005. The sampling rate is $\Delta t = 1$ day.

[Figure 1 omitted: the Athens Stock Exchange Indicator as a signal $x = x(t)$, 1998-2005, sampled daily.]

3. Neural Network Construction

In order to predict the time series of the Athens Stock Exchange Indicator, we construct a back propagation network (Wan A.E., 1990; Widrow B. & Lehr M., 1990; Rumelhart D., McClelland J. et al., 1986) that consists of one input layer, one middle (hidden) layer, and one output layer.
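To make the mechanics of such a network concrete, the following Python/NumPy fragment sketches a one-hidden-layer feed-forward network trained with the generalized delta rule. It is a minimal illustration rather than the authors' implementation: the class name, sigmoid activation, learning rate, and weight initialisation are assumptions introduced here.

```python
import numpy as np

class BackpropNet:
    """Minimal one-hidden-layer feed-forward network trained with the
    generalized delta rule (plain gradient descent on squared error).
    All hyper-parameters are illustrative assumptions."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))   # input -> hidden weights
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))  # hidden -> output weights
        self.b2 = np.zeros(n_out)
        self.lr = lr

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(self, x):
        self.h = self._sigmoid(x @ self.W1 + self.b1)  # hidden activations
        return self.h @ self.W2 + self.b2              # linear output layer

    def train_step(self, x, target):
        y = self.forward(x)
        err = y - target                               # output-layer error signal
        # Back-propagate: hidden delta uses sigmoid'(h) = h * (1 - h)
        delta_h = (err @ self.W2.T) * self.h * (1.0 - self.h)
        # Gradient-descent updates of the weighted connections
        self.W2 -= self.lr * np.outer(self.h, err)
        self.b2 -= self.lr * err
        self.W1 -= self.lr * np.outer(x, delta_h)
        self.b1 -= self.lr * delta_h
        return 0.5 * float(err @ err)                  # squared-error loss
```

Training would simply call `train_step` over all input/output pairs of the training set for several epochs, exactly the repeated weight-adjustment process described above.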
The input layer has 3 processing elements (PE), or neurons; the middle layer has $2 \times 3 + 1 = 7$ neurons, according to the Kolmogorov theorem (Kolmogorov N. A., 1950; Blackwell D., 1947). We choose the input layer to have 3 inputs because the minimum embedding dimension is 3, the attractor being embedded in a three-dimensional phase space (Hanias P.M., Curtis G.P., and Thalassinos E.J., 2006). Beginning with the first value of the time series, the first set of input data is $x_1, x_2, x_3$ and the corresponding outputs are $x_4, x_5, x_6, x_7, x_8, x_9, x_{10}, x_{11}, x_{12}$. …
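The pairing just described can be read as a windowing of the series: each set of 3 consecutive values is mapped to the 9 values that follow it. The excerpt does not state how subsequent windows advance, so the stride is left as a parameter in this hypothetical NumPy sketch (the function and parameter names are introduced here, not taken from the paper):

```python
import numpy as np

def make_windows(series, n_in=3, n_out=9, step=1):
    """Slice a 1-D series into (inputs, targets) pairs: each row of X
    holds n_in consecutive values, and the matching row of Y holds the
    n_out values that follow them. 'step' is the (assumed) window stride."""
    X, Y = [], []
    for start in range(0, len(series) - n_in - n_out + 1, step):
        X.append(series[start : start + n_in])
        Y.append(series[start + n_in : start + n_in + n_out])
    return np.asarray(X), np.asarray(Y)

# Toy example: the first pair is (x1, x2, x3) -> (x4, ..., x12)
series = np.arange(1, 25, dtype=float)   # stands in for the indicator values
X, Y = make_windows(series)
print(X[0])   # [1. 2. 3.]
print(Y[0])   # [ 4.  5.  6.  7.  8.  9. 10. 11. 12.]
```

With the $2 \times 3 + 1 = 7$ hidden neurons of the text, these pairs would train a 3-7-9 network of the kind sketched in Section 3.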