Abstract

This research work investigates the use of Artificial Neural Network (ANN) based models for solving first- and second-order linear constant-coefficient ordinary differential equations with initial conditions. In particular, we employ a feed-forward Multilayer Perceptron Neural Network (MLPNN), but bypass the standard back-propagation algorithm for updating the intrinsic weights. A trial solution of the differential equation is written as a sum of two parts. The first part satisfies the initial or boundary conditions and contains no adjustable parameters. The second part involves a feed-forward neural network that is trained to satisfy the differential equation. Numerous works have appeared in recent times on the solution of differential equations using ANN; however, the majority of these employ a single-hidden-layer perceptron model with a back-propagation algorithm for weight updates. For the homogeneous case, we assume a solution in exponential form and compute a polynomial approximation using statistical regression. From this we take the unknown coefficients as the weights from the input layer to the hidden layer of the associated neural network trial solution. To obtain the weights from the hidden layer to the output layer, we form algebraic equations incorporating the default sign of the differential equation. We then apply the Gaussian Radial Basis Function (GRBF) approximation model to achieve our objective. The weights obtained in this manner need not be adjusted. We proceed to develop a neural network algorithm using MathCAD software, which enables us to slightly adjust the intrinsic biases. We compare the convergence and accuracy of our results with analytic solutions as well as with well-known numerical methods, and obtain satisfactory results for our example ODE problems.
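To make the two-part trial solution concrete, the following is a minimal sketch in our own notation (the symbols x_0, A, B and N(x, p) are illustrative and not taken verbatim from the paper), for initial value problems of the kind treated here:

\[
  \hat{y}(x) = A + (x - x_0)\, N(x, \vec{p}), \qquad y(x_0) = A \quad \text{(first order)},
\]
\[
  \hat{y}(x) = A + B\,(x - x_0) + (x - x_0)^{2}\, N(x, \vec{p}), \qquad y(x_0) = A,\ y'(x_0) = B \quad \text{(second order)}.
\]

The leading terms satisfy the initial conditions exactly and contain no adjustable parameters, while the network term and (in the second-order case) its first derivative vanish at x_0, so only the differential equation itself constrains the network.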

Highlights

  • The beginning of Neuro-computing is often taken to be the research article of McCulloch and Pitts [1], published in 1943, which showed that even simple types of neural networks could, in principle, compute any arithmetic or logical function; the article was widely read and had great influence

  • This research work investigates the use of Artificial Neural Network (ANN) based models for solving first- and second-order linear constant-coefficient ordinary differential equations with initial conditions

  • We present a new perspective for obtaining solutions of initial value problems using Artificial Neural Networks (ANN)

Introduction

The beginning of Neuro-computing is often taken to be the research article of McCulloch and Pitts [1], published in 1943, which showed that even simple types of neural networks could, in principle, compute any arithmetic or logical function; the article was widely read and had great influence. We present a new perspective for obtaining solutions of initial value problems using Artificial Neural Networks (ANN). We find that a neural network based model for the solution of ordinary differential equations (ODEs) provides a number of advantages over standard numerical methods. The neural network based solution is differentiable and is in closed analytic form, and it exhibits very good generalization properties. The major advantage of our method is that it considerably reduces the computational complexity involved in weight updating, while maintaining satisfactory accuracy.
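The following is a minimal Python sketch (not the authors' MathCAD implementation) of what such a closed-form, differentiable trial solution looks like with a single hidden layer of Gaussian radial basis functions. The test problem y' = -y, y(0) = 1 and all weight values are illustrative assumptions, not results from the paper.

import numpy as np

# Assumed illustrative parameters of a small Gaussian RBF network.
centers = np.array([0.0, 0.5, 1.0])   # RBF centres (assumed)
width   = 0.5                         # common Gaussian width (assumed)
v       = np.array([0.3, -0.2, 0.1])  # hidden-to-output weights (placeholders)

def N(x):
    """Network output: weighted sum of Gaussian radial basis functions."""
    return np.sum(v * np.exp(-((x - centers) ** 2) / (2.0 * width ** 2)))

def dN(x):
    """Exact derivative of the network output (closed analytic form)."""
    g = np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))
    return np.sum(v * g * (-(x - centers) / width ** 2))

def y_trial(x, A=1.0, x0=0.0):
    """Two-part trial solution: A enforces y(x0) = A; the second part vanishes at x0."""
    return A + (x - x0) * N(x)

def dy_trial(x, A=1.0, x0=0.0):
    """Derivative of the trial solution, obtained without finite differences."""
    return N(x) + (x - x0) * dN(x)

# Residual of the assumed test ODE y' + y = 0 at sample points;
# any training or weight-selection scheme would drive this toward zero.
xs = np.linspace(0.0, 1.0, 5)
print(np.array([dy_trial(x) + y_trial(x) for x in xs]))

Because the trial solution is an explicit expression in x, both the solution and its derivatives can be evaluated anywhere in the domain, which is the source of the generalization and differentiability advantages noted above.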
