Abstract

Robust extraction of consensus sets from noisy data is a fundamental problem in robot vision. Existing multimodel estimation algorithms have shown success in estimating large consensus sets. One remaining challenge is extracting small consensus sets from cluttered multimodel data sets. In this article, we present an effective multimodel extraction method that addresses this challenge. Our technique is based on smallest-consensus-set random sampling, which we prove is guaranteed to extract all consensus sets larger than the smallest set from the input data. We then develop an efficient model competition scheme that iteratively removes redundant and incorrect model samplings. Extensive experiments on both synthetic data and real data with a high percentage of outliers and multimodel intersections demonstrate the superiority of our method.

Highlights

  • Robust extraction of consensus sets (CSs) from noisy data is a fundamental problem in computer vision

  • Our technique is based on smallest-CS random sampling, which we prove is guaranteed to extract all CSs larger than the smallest set from the input data (see the sampling-count sketch after this list)

  • We find that over 95% of the total computation time is used for generating sufficient model hypotheses via random sampling, while only 5% is used for model competition
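
The guarantee in the second highlight rests on drawing enough smallest-CS (minimal) samples. As a rough sketch, assuming the standard analysis used by RANSAC-style samplers (the article's exact bound may differ), the number of samples required to hit at least one all-inlier minimal sample with a given confidence can be computed as follows; the function name and parameter names are illustrative:

```python
# Sketch (not the article's exact bound) of the standard minimal-sample count:
# the number of random smallest-CS samples needed so that, with confidence P,
# at least one sample is drawn entirely from a consensus set whose inlier
# ratio is at least eps, when a model needs s points to be instantiated.
import math

def required_samples(confidence_p: float, inlier_ratio_eps: float, sample_size_s: int) -> int:
    return math.ceil(math.log(1.0 - confidence_p)
                     / math.log(1.0 - inlier_ratio_eps ** sample_size_s))

# e.g. required_samples(0.99, 0.10, 2) -> 459 samples for line fitting with 10% inliers
```

This count grows quickly as the smallest consensus set of interest shrinks, which is consistent with the observation above that hypothesis generation dominates the total computation time.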


Summary

Introduction

Robust extraction of consensus sets (CSs) from noisy data is a fundamental problem in computer vision. We develop an efficient model competition scheme that iteratively removes redundant and incorrect model samplings. Extensive experiments on both synthetic data and real data demonstrate that our approach can handle an unknown number of models, a high percentage of outliers, large model size variations, and multimodel intersections. The next step is to iteratively extract the globally optimal models from this data set. Recall that algorithms such as J-Linkage [10] propose to merge models with small distances. If the inlier ratio of a selected model M is higher than the given threshold S, this model is likely to be sampled more than once during the initial rounds of sampling, while many non-optimal, redundant model hypotheses may coexist within the sampled model set {M_j}. Input: data points, model confidence P, and minimum inlier-ratio threshold S.
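
To make the flow above concrete, here is a minimal Python sketch of the two stages: smallest-consensus-set sampling followed by model competition. The helper names (fit_model, residuals), the overlap-based redundancy test, and all parameter values are illustrative assumptions, not the article's implementation.

```python
# Sketch of smallest-CS sampling followed by model competition.
# `fit_model(points)` and `residuals(model, points)` are assumed callables
# supplied for the model class at hand (e.g. lines, planes, homographies).
import numpy as np

def sample_hypotheses(points, fit_model, residuals, min_sample_size,
                      n_samples, inlier_tol):
    """Generate model hypotheses from random smallest (minimal) samples."""
    rng = np.random.default_rng(0)
    hypotheses = []
    for _ in range(n_samples):
        idx = rng.choice(len(points), size=min_sample_size, replace=False)
        model = fit_model(points[idx])                 # hypothesis from a minimal set
        inliers = np.flatnonzero(residuals(model, points) < inlier_tol)
        hypotheses.append((model, set(inliers.tolist())))
    return hypotheses

def model_competition(points, hypotheses, min_inlier_ratio, overlap_thresh=0.5):
    """Iteratively keep the best remaining model and drop redundant ones.

    Redundancy is judged here by consensus-set overlap, in the spirit of
    merging models with small distance (cf. J-Linkage); the article's own
    competition rule may differ.
    """
    n = len(points)
    # Discard hypotheses whose inlier ratio is below the threshold S.
    candidates = [(m, cs) for m, cs in hypotheses if len(cs) / n >= min_inlier_ratio]
    selected = []
    while candidates:
        # Pick the hypothesis with the largest consensus set.
        best_model, best_cs = max(candidates, key=lambda h: len(h[1]))
        selected.append((best_model, best_cs))
        # Remove hypotheses that mostly explain the same points (redundant models).
        candidates = [(m, cs) for m, cs in candidates
                      if len(cs & best_cs) / max(len(cs), 1) < overlap_thresh]
    return selected
```

In this sketch, a model that happens to be sampled more than once simply produces overlapping consensus sets, and the competition step keeps only one of them, mirroring the redundancy removal described above.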
