Abstract

In this article we present a new adaptive algorithm for solving 2D interpolation problems of large scattered data sets through the radial basis function partition of unity method. Unlike other time-consuming schemes, this adaptive method is able to efficiently deal with scattered data points whose density varies strongly over the domain. This goal is achieved by decomposing the underlying domain into subdomains of variable size so as to guarantee a suitable number of points within each of them. The localization of such points is performed by means of an efficient search procedure based on a partition of the domain into square cells. For each subdomain the adaptive process identifies a predefined neighborhood consisting of one or more levels of neighboring cells, which allows us to quickly find all the subdomain points. The algorithm is further devised for an optimal selection of the local shape parameters associated with the radial basis function interpolants via leave-one-out cross validation and maximum likelihood estimation techniques. Numerical experiments show good performance of this adaptive algorithm on some test examples with different data distributions. The efficacy of our interpolation scheme is also demonstrated by solving real-world applications.
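To make the shape parameter selection step concrete, below is a minimal Python sketch of leave-one-out cross validation for a single local RBF interpolant, using Rippa's closed-form expression for the leave-one-out errors. The Gaussian kernel, the candidate grid, and the function names (`gaussian_kernel`, `loocv_cost`, `select_shape_parameter`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gaussian_kernel(r, eps):
    """Gaussian RBF; the paper may employ other radial kernels as well."""
    return np.exp(-(eps * r) ** 2)

def loocv_cost(points, values, eps):
    """Rippa-style LOOCV cost for one candidate shape parameter eps.

    With A the kernel matrix and c = A^{-1} f, the leave-one-out error at
    node k is e_k = c_k / (A^{-1})_{kk}, so no refitting of N interpolants
    is needed.  A plain inverse is used here for brevity; a stable solver
    or regularization would be preferable in practice.
    """
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = gaussian_kernel(r, eps)
    Ainv = np.linalg.inv(A)
    c = Ainv @ values
    errors = c / np.diag(Ainv)
    return np.linalg.norm(errors)

def select_shape_parameter(points, values, eps_grid):
    """Pick the shape parameter with the smallest LOOCV cost on a grid."""
    costs = [loocv_cost(points, values, eps) for eps in eps_grid]
    return eps_grid[int(np.argmin(costs))]

# Example: optimize eps on a small local node set (as done per subdomain).
rng = np.random.default_rng(0)
pts = rng.random((40, 2))
vals = np.sin(2 * np.pi * pts[:, 0]) * np.cos(2 * np.pi * pts[:, 1])
eps_grid = np.logspace(-0.5, 1.0, 20)
print(select_shape_parameter(pts, vals, eps_grid))
```

In the RBF-PUM setting this selection would be repeated independently on the nodes of each subdomain, so that every local interpolant receives its own shape parameter.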

Highlights

  • In kernel-based approximation, radial basis function (RBF) methods are effective meshfree techniques that can be implemented to numerically solve various types of science and engineering problems.

  • In this work, we focus on a local RBF method, the radial basis function partition of unity method (RBF-PUM), which allows us to decompose a large problem into several small subproblems.

  • The data are organized in the two-dimensional space by means of square cells so as to select the nodes belonging to the various subdomains in the RBF-PUM based interpolation (a sketch of this cell-based search follows the list).
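The following is a minimal Python sketch of such a cell-based search, assuming square cells whose side is comparable to the subdomain radius. The helper names (`build_cell_structure`, `points_in_subdomain`) and the rule used to choose the number of neighboring cell levels are illustrative assumptions rather than the exact procedure of the paper.

```python
import numpy as np
from collections import defaultdict

def build_cell_structure(points, cell_size):
    """Bin each 2D point into a square cell of side cell_size."""
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / cell_size).astype(int)
    cells = defaultdict(list)
    for k, (ix, iy) in enumerate(idx):
        cells[(ix, iy)].append(k)
    return origin, cells

def points_in_subdomain(points, origin, cells, cell_size, center, radius):
    """Collect the points within `radius` of `center` by scanning only the
    cell containing `center` plus a few levels of neighboring cells."""
    cx, cy = np.floor((center - origin) / cell_size).astype(int)
    levels = int(np.ceil(radius / cell_size))  # neighboring cell levels to visit
    candidates = []
    for ix in range(cx - levels, cx + levels + 1):
        for iy in range(cy - levels, cy + levels + 1):
            candidates.extend(cells.get((ix, iy), []))
    candidates = np.array(candidates, dtype=int)
    if candidates.size == 0:
        return candidates
    d = np.linalg.norm(points[candidates] - center, axis=1)
    return candidates[d <= radius]

# Example usage on random 2D data.
rng = np.random.default_rng(1)
pts = rng.random((1000, 2))
origin, cells = build_cell_structure(pts, cell_size=0.05)
local_idx = points_in_subdomain(pts, origin, cells, 0.05,
                                np.array([0.5, 0.5]), 0.1)
print(local_idx.size)
```

Because each point is binned once and each subdomain query examines only the points falling in a small patch of neighboring cells, the local node sets can be gathered without scanning the whole data set.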



Introduction

In kernel-based approximation, radial basis function (RBF) methods are effective meshfree techniques that can be implemented to numerically solve various types of science and engineering problems. They deserve to be considered when one has to model (systems of) partial differential equations (PDEs), and when a multivariate scattered data fitting problem needs to be faced and solved appropriately. The latter problem is relevant in several situations in which surface reconstruction involves unstructured large data sets, requiring in some way the construction of adaptive interpolation or approximation methods. Approximating scattered data of high complexity arises in various areas of applied science, ranging from scanner acquisitions to geographic benchmarks, as well as in industrial and medical applications where large random data configurations are usually processed. In such cases scattered data can have significantly different distributions, e.g. data with highly varying density or data with voids, which demand adaptive algorithms (see e.g. [8,13]).

