Abstract
Random Sample Consensus (RANSAC) is an iterative algorithm for robustly estimating model parameters from observed data in the presence of outliers. First proposed by Fischler and Bolles in 1981, it remains a very popular algorithm in the computer vision community. The primary objective of their paper was to find an effective strategy for excluding outliers from the estimation process, but it did not consider the presence of noise among the inliers. A common practice among implementations of RANSAC is to draw a few more samples than the minimum required for the estimation problem, but the implications of this heuristic are largely unexplored in the literature. In this paper, we present a probabilistic analysis of this common heuristic and explore the possibility of finding an optimal size for the set of data points randomly sampled in each iteration of RANSAC. We also improve upon the lower bound for the number of iterations of RANSAC required to recover the model parameters. On the basis of this analysis, we propose an improvement to the hypothesis step of the RANSAC algorithm. Since this step is shared (unchanged) by many variants of RANSAC, their performance can also be improved. The paper also presents the improvements achieved by incorporating the findings of our analysis into two popular variants of RANSAC.
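The classical lower bound on the number of iterations referred to above follows from a standard probabilistic argument: choose the smallest number of iterations such that, with the desired confidence, at least one sampled subset consists entirely of inliers. A minimal sketch (the function name and parameter choices are illustrative, not from the paper) shows how this bound grows when each sample contains extra points beyond the minimal size:

```python
import math

def ransac_iterations(p_success, inlier_ratio, sample_size):
    """Classical lower bound on RANSAC iterations: the smallest k such that
    the probability of drawing at least one all-inlier sample of the given
    size is at least p_success."""
    # Probability that one random sample of `sample_size` points
    # contains only inliers (with-replacement approximation).
    p_all_inliers = inlier_ratio ** sample_size
    # Solve (1 - p_all_inliers)^k <= 1 - p_success for k.
    return math.ceil(math.log(1 - p_success) / math.log(1 - p_all_inliers))

# Illustrative example: line fitting (minimal sample size 2) with 50%
# inliers, versus drawing one extra point per sample.
k_minimal = ransac_iterations(0.99, 0.5, 2)  # → 17
k_extra = ransac_iterations(0.99, 0.5, 3)    # → 35
```

Under this classical bound, enlarging the per-iteration sample always increases the required number of iterations, which is why the trade-off the paper analyzes (extra samples versus iteration count and noise robustness) is not obvious from the bound alone.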