Abstract

Taylor models are a polynomial generalization of the simple interval approach to rigorous computation for differential equations proposed by Martin Berz; they are used to obtain tighter guaranteed enclosures of solutions. The authors of recent papers describing the mathematical framework have suggested several improvements to the models. Since these models are intended for high-dimensional systems of equations, a parallel version is required. In this paper, the improved Taylor models (TMs) are implemented on general-purpose graphics processing units (GPUs). The algebraic operations and algorithms are implemented with optimizations specific to this computational architecture, namely addition, multiplication, convolution of two TMs, substitution of the independent variable, integration with respect to a variable, and estimation of the bounding interval. For this purpose, interval arithmetic is implemented using intrinsic functions. A multi-GPU version is also implemented, and its scalability is verified. Various test examples are presented and the achieved acceleration is measured. The reduction operation is found to be the bottleneck of GPU performance. Using the GPU version is recommended for sufficiently large problem dimensions.
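The abstract itself contains no code, so the following is only a minimal sketch of how rigorous interval arithmetic can be built from CUDA's directed-rounding intrinsics (__dadd_rd, __dadd_ru, __dmul_rd, __dmul_ru), which is the kind of intrinsic-based implementation the abstract refers to. The Interval struct and the names iadd, imul, and iadd_kernel are illustrative assumptions, not taken from the paper.

// Hypothetical minimal interval type; the paper's actual data layout is not known.
struct Interval { double lo, hi; };

// Interval addition with outward rounding: lower bound rounded toward -inf,
// upper bound rounded toward +inf, so the true sum is always enclosed.
__device__ Interval iadd(Interval a, Interval b) {
    Interval r;
    r.lo = __dadd_rd(a.lo, b.lo);  // round toward -infinity
    r.hi = __dadd_ru(a.hi, b.hi);  // round toward +infinity
    return r;
}

// Interval multiplication: the result bounds are the min/max over the four
// endpoint products, each computed with the appropriate rounding direction.
__device__ Interval imul(Interval a, Interval b) {
    double p1 = __dmul_rd(a.lo, b.lo), p2 = __dmul_rd(a.lo, b.hi);
    double p3 = __dmul_rd(a.hi, b.lo), p4 = __dmul_rd(a.hi, b.hi);
    double q1 = __dmul_ru(a.lo, b.lo), q2 = __dmul_ru(a.lo, b.hi);
    double q3 = __dmul_ru(a.hi, b.lo), q4 = __dmul_ru(a.hi, b.hi);
    Interval r;
    r.lo = fmin(fmin(p1, p2), fmin(p3, p4));
    r.hi = fmax(fmax(q1, q2), fmax(q3, q4));
    return r;
}

// Elementwise kernel, e.g. for adding the interval coefficients of two TMs:
// each thread handles one coefficient pair.
__global__ void iadd_kernel(const Interval* a, const Interval* b,
                            Interval* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = iadd(a[i], b[i]);
}

Per-coefficient operations like the one above parallelize trivially; a bound estimation, by contrast, requires a min/max reduction across all coefficients, which is consistent with the abstract's observation that reduction is the bottleneck of GPU performance.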
