Abstract

Parametric chamfer alignment (PChA) is commonly employed for aligning an observed set of points with a corresponding set of reference points. PChA estimates optimal geometric transformation parameters that minimize an objective function formulated as the sum of the squared distances from each transformed observed point to its closest reference point. A distance transform enables efficient computation of the (squared) distances, and the objective function minimization is commonly performed via the Levenberg–Marquardt (LM) nonlinear least squares iterative optimization algorithm. The point-wise computations of the objective function, gradient, and Hessian approximation required for the LM iterations make PChA computationally demanding for large-scale datasets. We propose an acceleration of PChA via a parallelized and pipelined realization that is particularly well suited for large-scale datasets and for modern GPU architectures. Specifically, we partition the observed points among the GPU blocks and decompose the expensive LM calculations in correspondence with the GPU's single-instruction multiple-thread architecture to significantly speed up this bottleneck step for PChA on large-scale datasets. Additionally, by reordering computations, we propose a novel pipelining of the LM algorithm that offers further speedup by exploiting the low arithmetic latency of the GPU compared with its high global memory access latency. Results obtained on two different platforms for both 2D and 3D large-scale point datasets from our ongoing research demonstrate that the proposed PChA GPU implementation provides a significant speedup over its single-CPU counterpart.
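The core pipeline described above can be illustrated with a minimal CPU sketch. This is not the paper's implementation; it is a hedged toy example (synthetic circle data, a 2D rigid transform, and the grid size are all assumptions) showing the two ingredients the abstract names: a distance transform that turns nearest-reference-point distances into table lookups, and Levenberg–Marquardt minimization of the per-point residuals.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates
from scipy.optimize import least_squares

# Synthetic 2D illustration (sizes and data are assumptions, not from the
# paper): align observed points to a reference set under a rigid transform
# (theta, tx, ty) by minimizing the chamfer objective, i.e. the sum of
# squared distances from each transformed observed point to its nearest
# reference point, looked up in a precomputed distance transform.

GRID = 200  # resolution of the distance-transform grid (assumption)

# Reference points on a circle; observed points are the same circle,
# rotated about the reference centroid and translated.
angles = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
ref = np.stack([100 + 40 * np.cos(angles), 100 + 40 * np.sin(angles)], axis=1)
c = ref.mean(axis=0)

def rigid(points, theta, tx, ty):
    """Rotate about the reference centroid, then translate."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return (points - c) @ R.T + c + np.array([tx, ty])

obs = rigid(ref, 0.15, 6.0, -4.0)  # ground-truth misalignment (synthetic)

# Distance transform of the reference set: dt[i, j] holds the Euclidean
# distance from pixel (row i, col j) to the nearest reference point, so a
# nearest-point distance becomes an O(1) lookup instead of a search.
mask = np.ones((GRID, GRID), dtype=bool)
idx = np.round(ref).astype(int)
mask[idx[:, 1], idx[:, 0]] = False
dt = distance_transform_edt(mask)

def residuals(p):
    """Per-point residuals: bilinearly interpolated distance-transform
    lookup at each transformed observed point (interpolation keeps the
    objective piecewise smooth for the finite-difference Jacobian)."""
    pts = np.clip(rigid(obs, *p), 0.0, GRID - 1.0)
    return map_coordinates(dt, [pts[:, 1], pts[:, 0]], order=1)

# Levenberg-Marquardt on the per-point residuals (SciPy's MINPACK wrapper).
fit = least_squares(residuals, x0=np.zeros(3), method="lm")
```

The per-point structure of `residuals` is what makes the GPU mapping natural: each observed point's residual, gradient contribution, and Gauss–Newton Hessian contribution can be computed independently and then reduced, which corresponds to the partitioning across GPU blocks that the abstract describes.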
