Abstract
Model fitting is a fundamental component in computer vision for salient data selection, feature extraction and data parameterization. Conventional approaches such as the RANSAC family show limitations when dealing with data containing multiple models, a high percentage of outliers or sample selection bias, all commonly encountered in computer vision applications. In this paper, we present a novel model evaluation function based on Gaussian-weighted Jensen–Shannon divergence and integrate it into a particle swarm optimization (PSO) framework using ring topology. We avoid two problems from which most regression algorithms suffer, namely the requirements to specify the inlier noise scale and the number of models. The novel evaluation method is generic and does not require any estimation of the inlier noise. The continuous, meta-heuristic exploration facilitates the estimation of each individual model while delivering the number of models automatically. Tests on datasets containing inlier noise and a large percentage of outliers (more than 90% of the data) demonstrate that the proposed framework can efficiently estimate multiple models without prior information. Superior performance in terms of processing time and robustness to inlier noise is also demonstrated with respect to state-of-the-art methods.
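This page does not reproduce the authors' formulation, so the Python sketch below is an illustrative assumption of how the two named ingredients could fit together: a hypothesis (here a 2D line) is scored by the Jensen–Shannon divergence between a Gaussian-weighted residual histogram and a uniform reference, and hypothesis space is explored by a ring-topology (lbest) PSO in which each particle exchanges information only with its two ring neighbours. The function names, the line parameterization and the data-driven weighting scale are all hypothetical, not the paper's implementation.

```python
import numpy as np

# --- Hypothesis evaluation: Gaussian-weighted Jensen-Shannon divergence (sketch) ---

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = p / (p.sum() + eps), q / (q.sum() + eps)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def gjsd_fitness(params, points, n_bins=64):
    """Score a line hypothesis ax + by + c = 0 by the JS divergence between
    the Gaussian-weighted residual histogram and a uniform reference: a good
    hypothesis concentrates weighted residual mass near zero, far from
    uniformity. The weighting scale is derived from the residuals themselves
    (median absolute residual), so no inlier noise scale is supplied."""
    a, b, c = params
    r = np.abs(points @ np.array([a, b]) + c) / (np.hypot(a, b) + 1e-12)
    sigma = np.median(r) + 1e-12              # data-driven scale, not a user prior
    w = np.exp(-0.5 * (r / sigma) ** 2)       # Gaussian weighting of residuals
    hist, _ = np.histogram(r, bins=n_bins, weights=w)
    return js_divergence(hist, np.full(n_bins, 1.0 / n_bins))

# --- Ring-topology (lbest) PSO: each particle sees only its two ring neighbours ---

def ring_pso(points, n_particles=30, n_iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, 3))  # particle positions (a, b, c)
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_fit = np.array([gjsd_fitness(p, points) for p in x])
    for _ in range(n_iters):
        for i in range(n_particles):
            # local best among the particle and its ring neighbours i-1, i+1
            nbr = [(i - 1) % n_particles, i, (i + 1) % n_particles]
            lbest = pbest[nbr[np.argmax(pbest_fit[nbr])]]
            r1, r2 = rng.random(3), rng.random(3)
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (lbest - x[i])
            x[i] = x[i] + v[i]
            f = gjsd_fitness(x[i], points)
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = x[i].copy(), f
    return pbest, pbest_fit  # diverse local optima ~ multiple model candidates
```

Restricting communication to ring neighbours slows the spread of any single global best through the swarm, so distinct particles can settle on distinct local optima. This is the standard motivation for lbest topologies and is consistent with the abstract's claim that multiple models, and their number, emerge from a single run.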
Highlights
Model fitting techniques are widely used in computer vision applications, in which the data to be modeled usually contains a significant number of outliers.
Kernel function-based approaches estimate the residual histogram density using a smoothness parameter; they can handle small pseudo-structures, but suffer from the well-known density problem, i.e. over- or under-estimation of the proportion of inliers. Both categories of hypothesis evaluation function still require parameters, although these parameters are less sensitive than the inlier noise scale used in RANSAC.
Our experimental results demonstrate superior performance in terms of accuracy and stability under various challenging configurations, while no prior information such as the number of models, the inlier noise scale or the outlier rate is provided beforehand.
Summary
Introduction and related work
Model fitting techniques are widely used in computer vision applications, in which the data to be modeled usually contains a significant number of outliers. Robust estimators such as RANSAC can tolerate such contamination, but this attractive capability brings the requirement of an application-dependent parameter, i.e. the inlier noise scale (also referred to as the bandwidth or truncation threshold). Since this prior information is usually not available in many practical computer vision applications, the development of nonparametric cost functions for robust estimators has received widespread attention in the recent literature [4,5,6,7,8,9]. Kernel function-based approaches estimate the residual histogram density using a smoothness parameter; they can handle small pseudo-structures, but suffer from the well-known density problem, i.e. over- or under-estimation of the proportion of inliers. Both categories of hypothesis evaluation function still require parameters (a kurtosis or skewness threshold for the histogram-based approaches and a bandwidth for the kernel-based approaches), although these parameters are less sensitive than the inlier noise scale used in RANSAC.
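As a concrete illustration of the bandwidth sensitivity described above, the following sketch (the helper name and the synthetic 20%-inlier data are hypothetical, not from the paper) estimates the residual density with a Gaussian kernel at two bandwidths: the narrow one produces spurious modes (pseudo-structures), while the wide one flattens the inlier peak, which is exactly the over- or under-estimation of the inlier proportion.

```python
import numpy as np

def kde_residual_density(residuals, grid, bandwidth):
    """Gaussian-kernel density estimate of the residual distribution.
    `bandwidth` is the smoothness parameter referred to in the text."""
    r = np.asarray(residuals)[None, :]                 # shape (1, n)
    g = np.asarray(grid)[:, None]                      # shape (m, 1)
    k = np.exp(-0.5 * ((g - r) / bandwidth) ** 2)      # kernel matrix (m, n)
    return k.sum(axis=1) / (r.size * bandwidth * np.sqrt(2 * np.pi))

# Synthetic residuals: 20% inliers around 0, 80% uniform outliers.
rng = np.random.default_rng(0)
res = np.concatenate([rng.normal(0, 0.05, 200), rng.uniform(-2, 2, 800)])
grid = np.linspace(-2, 2, 400)

narrow = kde_residual_density(res, grid, bandwidth=0.01)  # noisy, pseudo-modes
wide   = kde_residual_density(res, grid, bandwidth=0.5)   # inlier peak washed out
```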