Abstract

This chapter discusses the Euler method, a first-order numerical procedure for solving ordinary differential equations with a given initial value. It is the most basic explicit method for the numerical integration of ordinary differential equations and the simplest Runge–Kutta method. Because the method is first order, the local error (the error per step) is proportional to the square of the step size, while the global error (the error at a given time) is proportional to the step size. The Euler method often serves as the basis for constructing more complicated methods. At the basis of all trigonometry lie the two addition theorems: sin (α + β) = sin α cos β + cos α sin β and cos (α + β) = cos α cos β − sin α sin β. However, with the aid of the relationship between the sine and the cosine, these addition theorems can be rewritten so that the value of sin (α + β) is expressed only in terms of sin α and sin β, and the value of cos (α + β) is expressed only in terms of cos α and cos β.
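To make the step rule concrete, here is a minimal Python sketch (not part of the chapter; the names euler, f, t0, y0, t_end and h are chosen only for this illustration). It advances an initial-value problem with the update y_{n+1} = y_n + h f(t_n, y_n) and, on the test problem y' = y with y(0) = 1, shows the global error at t = 1 shrinking roughly in proportion to the step size, as stated above.

    import math

    def euler(f, t0, y0, t_end, h):
        # Forward (explicit) Euler: advance the solution one step at a time
        # using y_{n+1} = y_n + h * f(t_n, y_n).
        n = round((t_end - t0) / h)
        t, y = t0, y0
        for _ in range(n):
            y = y + h * f(t, y)
            t = t + h
        return y

    # Test problem: y' = y, y(0) = 1, whose exact solution is y(t) = e^t.
    # Halving the step size roughly halves the global error at t = 1,
    # consistent with the first-order (O(h)) behaviour described above.
    for h in (0.1, 0.05, 0.025):
        approx = euler(lambda t, y: y, 0.0, 1.0, 1.0, h)
        print(f"h = {h:<6} error = {abs(approx - math.e):.6f}")

As for the trigonometric remark, one concrete form of the rewriting (assuming α and β lie in the first quadrant, so that both cosines are nonnegative) is sin (α + β) = sin α √(1 − sin² β) + sin β √(1 − sin² α) and cos (α + β) = cos α cos β − √(1 − cos² α) √(1 − cos² β); outside that range the signs of the square roots must be adjusted.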
