Abstract

Interval methods for ODEs often face two obstacles in practical computations: the dependency problem and the wrapping effect. Taylor model methods, which have been developed by Berz and his group, have recently attracted attention. By combining interval arithmetic with symbolic calculations, these methods suffer far less from the dependency problem than traditional interval methods for ODEs. By allowing nonconvex enclosure sets for the flow of a given initial value problem, Taylor model methods also have a high potential for suppressing the wrapping effect. Makino and Berz [1] advocate the so-called blunting method. In this paper, we analyze the blunting method (as an interval method) for a linear model ODE. We compare its convergence behavior with that of the well-known QR interval method. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
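As a rough illustration of the two obstacles named above (not taken from the paper), the following Python sketch shows the dependency problem for a naive interval evaluation of x − x and the wrapping effect for an axis-aligned box transported by repeated rotations; the Interval class and the rotation example are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (illustrative only): naive interval arithmetic showing the
# dependency problem, and box re-enclosure showing the wrapping effect.
import math

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        # Interval subtraction treats both operands as independent,
        # even when they refer to the same underlying variable.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Dependency problem: x - x over X = [0, 1] is exactly [0, 0],
# but naive interval evaluation yields the overestimate [-1, 1].
x = Interval(0.0, 1.0)
print("x - x =", x - x)

# Wrapping effect: enclosing the image of a box under a rotation by an
# axis-aligned box inflates the enclosure at every step, even though the
# rotation itself preserves the set's size.
def rotated_box_widths(widths, theta):
    """Widths of the axis-aligned bounding box of a rotated 2-D box."""
    c, s = abs(math.cos(theta)), abs(math.sin(theta))
    wx, wy = widths
    return (c * wx + s * wy, s * wx + c * wy)

w = (1.0, 1.0)
for k in range(4):
    w = rotated_box_widths(w, math.pi / 4)
    print(f"step {k + 1}: box widths = {w}")
```

For a rotation by 45°, the enclosing box widths grow by a factor of √2 per step, so the overestimation grows geometrically; this is the kind of blow-up that nonconvex Taylor model enclosures and methods such as the QR and blunting approaches aim to suppress.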
