Abstract

The multilayer perceptron has a wide range of classification and regression applications in many fields, such as pattern recognition, speech recognition and other classification problems. However, the choice of architecture has a great impact on the convergence of these networks. In the present paper we introduce a new approach to optimizing the network architecture; to solve the resulting model we use a genetic algorithm, and we train the network with a back-propagation algorithm. The numerical results confirm the effectiveness of the theoretical results presented in this paper and the advantages of the new model compared to previous models in the literature.

Highlights

  • In recent years, neural networks have attracted considerable attention, as they have proved essential in applications such as content-addressable memory, pattern recognition and optimization. Learning, or training, an ANN amounts to finding the values of all weights such that the desired output is produced for the corresponding input; it can be viewed as the minimization of an error function computed from the difference between the output of the network and the desired output over a set of training observations [1]. The Multilayer Perceptron is the most widely used model in neural network applications and is trained with the back-propagation algorithm.

  • Learning for the Multilayer Perceptron (MLP) is the process of adapting the connection weights so as to minimize the difference between the network output and the desired output. For this reason several algorithms have been used in the literature, such as ant colony optimization [11], but the most widely used is back-propagation, which is based on gradient-descent techniques [12] (see the sketch after this list).

  • M.C: misclassified data; Connect (%): percentage of connection weights used in the network between hidden layers; Tr.D: training data; Tes.D: testing data

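The highlight above describes back-propagation as gradient descent on the error between the network output and the desired output. The following is a minimal sketch of one such update for a single-hidden-layer MLP with sigmoid activations and squared-error loss; the function name, learning rate and layer shapes are illustrative assumptions, not the exact formulation used in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, target, W1, W2, lr=0.1):
    """One gradient-descent update for a single-hidden-layer MLP
    with sigmoid activations and squared-error loss (illustrative sketch)."""
    # Forward pass
    h = sigmoid(W1 @ x)                  # hidden activations
    y = sigmoid(W2 @ h)                  # network output
    # Backward pass: propagate the output error through the layers
    err = y - target                     # dE/dy for E = 0.5 * ||y - target||^2
    delta_out = err * y * (1 - y)        # output-layer local gradient
    delta_hid = (W2.T @ delta_out) * h * (1 - h)  # hidden-layer local gradient
    # Gradient-descent weight updates (in place)
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)
    return 0.5 * np.sum(err ** 2)        # current squared error

# Example usage on a toy 3-input, 2-output network
rng = np.random.default_rng(0)
W1 = 0.5 * rng.normal(size=(4, 3))       # 4 hidden units
W2 = 0.5 * rng.normal(size=(2, 4))
for _ in range(1000):
    loss = backprop_step(np.array([0.2, 0.7, 0.1]), np.array([1.0, 0.0]), W1, W2)
```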

Summary

Introduction

In recent years, neural networks have attracted considerable attention, as they have proved essential in applications such as content-addressable memory, pattern recognition and optimization. Choosing the number of connections and hidden layers when building a Multilayer Perceptron for a given problem remains one of the unsolved tasks in this research area. A Multilayer Perceptron consists of an input layer, an output layer and hidden layers between these two. We optimize the number of hidden layers and the number of neurons in each hidden layer, and we aim to use as few connections as possible in order to increase the speed and efficiency of the neural network. We model this neural-architecture problem as a mixed-integer non-linear program with non-linear constraints. Experimental results are given in Section 4.
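
The introduction states that the architecture (number of hidden layers and neurons per layer) is optimized with a genetic algorithm while the weights are trained by back-propagation. The sketch below shows only the generic genetic-algorithm loop over candidate hidden-layer sizes, not the paper's mixed-integer formulation; `train_and_score` is a hypothetical helper assumed to train a candidate MLP by back-propagation and return its validation error, and all parameter values are illustrative.

```python
import random

def evolve_architecture(train_and_score, max_layers=3, max_units=32,
                        pop_size=20, generations=30, mutation_rate=0.2):
    """Genetic-algorithm sketch for choosing hidden-layer sizes.
    `train_and_score(layers)` is a hypothetical helper expected to train an
    MLP with the given hidden-layer sizes and return its validation error."""
    def random_individual():
        n_layers = random.randint(1, max_layers)
        return [random.randint(1, max_units) for _ in range(n_layers)]

    def mutate(ind):
        ind = list(ind)
        if random.random() < mutation_rate:
            ind[random.randrange(len(ind))] = random.randint(1, max_units)
        return ind

    def crossover(a, b):
        cut = random.randint(1, min(len(a), len(b)))
        return a[:cut] + b[cut:]

    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=train_and_score)   # lower error is better
        parents = scored[: pop_size // 2]                   # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=train_and_score)
```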

Related Works
Units Feed-Forward Neural Networks for Pattern Classification
Multilayer Perceptron
Back-propagation and Learning for the MLP
Proposed Model to Optimize the MLP Weights and Architectures
Implementation And Numerical Results
Parameters setting
Results for the optimization methodology
Proposed Method
Conclusion