Abstract

The standard Backpropagation Neural Network (BPNN) algorithm is widely used to solve many real-world problems, but it suffers from difficulties such as slow convergence and convergence to local minima. Many modifications have been proposed to improve its performance, such as careful selection of the initial weights and biases, the learning rate, momentum, the network topology, and the activation function. This paper presents a new version of the backpropagation algorithm in which the error signal function is modified for deep neural networks with more than one hidden layer. Experiments were conducted to compare and evaluate the convergence behavior of these training algorithms on two training problems: XOR and Iris plant classification. The results show that the proposed algorithm improves on classical BP in terms of efficiency.
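For orientation, the quantity the modification targets is the output-layer error signal of classical BP. In standard notation, with desired output d_k, actual output o_k, and activation function f (this is a reconstruction from the general BP literature, not an equation reproduced from the paper):

\delta_k = (d_k - o_k)\, f'(\mathrm{net}_k)

OBP-style variants amplify this signal so that larger errors produce disproportionately larger weight updates; a form commonly quoted for OBP, stated here only as an assumption about the shape of the modification, replaces (d_k - o_k) with \pm\bigl(1 + e^{(d_k - o_k)^2}\bigr), taking the sign of (d_k - o_k).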

Highlights

  • An artificial neural network (ANN) is a software system that loosely models biological neurons.

  • The proposed algorithm improves the performance of the Optical Backpropagation (OBP) algorithm on deep neural networks; the experimental results show that it converges to a reasonable range of error after a small number of training epochs.

  • For a neural network with 3 units in the input layer, 3 hidden layers with 3, 4, and 7 nodes respectively, and 2 units in the output layer, the final weights from the input layer to the first hidden layer and from the first hidden layer to the second hidden layer, after training with the Extended Optical BP (EOBP) and the standard BP, are summarized in Table 4.7.


Summary

1. INTRODUCTION

An artificial neural network (ANN) is a software system that loosely models biological neurons. It consists of small processing units, known as artificial neurons, which can be trained to perform complex calculations. The proposed algorithm improves the performance of the Optical Backpropagation (OBP) algorithm on deep neural networks; the experimental results show that it converges to a reasonable range of error after a small number of training epochs. For a given set of input patterns applied to the first layer of the neural network, the signal is propagated through each upper layer until an output is generated. This output is compared to the known, desired output, and the error value is calculated. The algorithm for a 3-layer network with m input units, n hidden units, and p output units can be described as follows [2, 7]:

1. Initialize network weights (often to small random values)

10. Update the weights on the hidden layer:
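To make the truncated listing above concrete, the following is a minimal NumPy sketch of a classical 3-layer BP training loop on the XOR problem. Sigmoid activations and plain delta-rule updates are assumed; all variable names are illustrative rather than taken from the paper, and a comment marks the error signal that OBP/EOBP would modify.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR patterns: m = 2 input units, n = 3 hidden units, p = 1 output unit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [0]], dtype=float)

# Step 1: initialize network weights (and biases) to small random values.
W1 = rng.normal(scale=0.5, size=(2, 3))   # input -> hidden
b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(3, 1))   # hidden -> output
b2 = np.zeros(1)
lr = 0.5

for _ in range(20000):
    # Forward pass: propagate the inputs through each upper layer.
    H = sigmoid(X @ W1 + b1)
    O = sigmoid(H @ W2 + b2)

    # Output-layer error signal: delta_k = (d_k - o_k) * f'(net_k).
    # This is the quantity OBP/EOBP modify; classical BP uses it as-is.
    dO = (D - O) * O * (1.0 - O)
    # Hidden-layer error signal, back-propagated through W2.
    dH = (dO @ W2.T) * H * (1.0 - H)

    # Weight updates; the W1/b1 update corresponds to step 10 of the
    # listing above (update the weights on the hidden layer).
    W2 += lr * (H.T @ dO)
    b2 += lr * dO.sum(axis=0)
    W1 += lr * (X.T @ dH)
    b1 += lr * dH.sum(axis=0)

print(np.round(O.ravel(), 2))  # should approach [0, 1, 1, 0]

After convergence, the printed outputs should be close to the XOR targets; in an OBP/EOBP variant, the same loop would substitute the amplified error signal for dO.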
The EOBP Steps
EXPERIMENTAL EVALUATION
Iris Plant Classification
Compare Two Results Using Different Neural Network Architecture
CONCLUSION