Abstract
Since the Lucas-Kanade algorithm was proposed in 1981, image alignment has become one of the most widely used techniques in computer vision. Applications range from optical flow and tracking to layered motion, mosaic construction, and face coding. Numerous algorithms have been proposed and a wide variety of extensions have been made to the original formulation. We present an overview of image alignment, describing most of the algorithms and their extensions in a consistent framework. We concentrate on the inverse compositional algorithm, an efficient algorithm that we recently proposed. We examine which of the extensions to Lucas-Kanade can be used with the inverse compositional algorithm without any significant loss of efficiency, and which cannot. In this paper, Part 1 in a series of papers, we cover the quantity approximated, the warp update rule, and the gradient descent approximation. In future papers, we will cover the choice of the error function, how to allow linear appearance variation, and how to impose priors on the parameters.
Highlights
Image alignment consists of moving, and possibly deforming, a template to minimize the difference between the template and an image
We examine which of the extensions to Lucas-Kanade can be applied to the inverse compositional algorithm without any significant loss of efficiency, and which extensions require additional computation
In Sections 3.1.5 and 3.2.5 we showed that the inverse compositional algorithm was equivalent to the Lucas-Kanade algorithm
Summary
Image alignment consists of moving, and possibly deforming, a template to minimize the difference between the template and an image. One difference between the various approaches is whether they estimate an additive increment to the parameters (the additive approach (Lucas and Kanade, 1981)), or whether they estimate an incremental warp that is composed with the current estimate of the warp (the compositional approach (Shum and Szeliski, 2000)). Another difference is whether the algorithm performs a Gauss-Newton, a Newton, a steepest-descent, or a Levenberg-Marquardt approximation in each gradient descent step.
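The additive approach described above can be sketched for the simplest case: a pure-translation warp W(x; p) = x + p, aligned with Gauss-Newton steps. This is a minimal illustrative sketch, not the paper's reference implementation; the function names and the bilinear interpolation helper are assumptions introduced here.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinear interpolation of img at real-valued coordinates (x, y),
    clamped at the image border."""
    h, w = img.shape
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    ax, ay = np.clip(x - x0, 0.0, 1.0), np.clip(y - y0, 0.0, 1.0)
    return ((1 - ay) * ((1 - ax) * img[y0, x0] + ax * img[y0, x0 + 1])
            + ay * ((1 - ax) * img[y0 + 1, x0] + ax * img[y0 + 1, x0 + 1]))

def lucas_kanade_translation(image, template, p0, n_iters=50, tol=1e-6):
    """Additive Lucas-Kanade for a translation-only warp W(x; p) = x + p.
    Each iteration performs one Gauss-Newton step on the sum of squared
    differences between the template and the warped image."""
    p = np.asarray(p0, dtype=float)
    h, w = template.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Image gradients (axis 0 is y, axis 1 is x), computed once.
    gy, gx = np.gradient(image.astype(float))
    for _ in range(n_iters):
        # Warp the image toward the template: I(W(x; p)).
        wx, wy = xs + p[0], ys + p[1]
        Iw = bilinear(image, wx, wy)
        Ix = bilinear(gx, wx, wy)
        Iy = bilinear(gy, wx, wy)
        # Error image and steepest-descent images; for translation the
        # warp Jacobian dW/dp is the identity.
        e = (template.astype(float) - Iw).ravel()
        J = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
        H = J.T @ J                      # Gauss-Newton Hessian approximation
        dp = np.linalg.solve(H, J.T @ e)
        p += dp                          # additive update: p <- p + dp
        if np.linalg.norm(dp) < tol:
            break
    return p
```

The compositional approach would instead compose an incremental warp with the current warp estimate; for pure translation the two updates coincide, which is why the distinction only becomes material for richer warps such as affine or homography parameterizations.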