Abstract

The evaluation of disparity (range) maps requires the selection of an objective image quality (or error) measure. Among existing measures, the percentage of bad matched pixels (BMP) is the most commonly used; however, it requires a disparity error tolerance and ignores the relationship between range and disparity. In this research, twelve error measures are characterized in order to provide a basis for selecting accurate stereo algorithms during the evaluation process. Adaptations of objective quality measures for evaluating the accuracy of disparity maps are proposed. The adapted objective measures operate in a manner similar to the original objective measures, but allow special handling of missing data. Additionally, the adapted objective measures are sensitive to errors in range and surface structure, which cannot be measured using BMP. Their utility was demonstrated by evaluating a set of 50 stereo disparity algorithms known in the literature. The consistency of the proposed measures was evaluated using two conceptually different stereo algorithm evaluation methodologies: ordinary ranking, and partitioning and grouping of algorithms with comparable accuracy. The evaluation results showed that partitioning and grouping make a fair judgment about the accuracy of disparity algorithms.
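The abstract's criticism that BMP "ignores the relationship between range and disparity" rests on the depth-from-disparity relation for a rectified stereo rig, Z = fB/d: the same disparity error produces a much larger range error for distant points. A minimal sketch of this effect (the focal length, baseline and disparity values below are made-up numbers, not from the paper):

```python
# Depth from disparity for a rectified rig: Z = f * B / d.
# A fixed 1-pixel disparity error causes a range error that
# grows with distance; f, B and the disparities are assumed values.
f, B = 700.0, 0.16            # focal length (px) and baseline (m), assumed
for d in (70.0, 7.0):         # near point vs far point
    z_true = f * B / d
    z_est = f * B / (d - 1.0)  # same 1-pixel disparity error for both
    print(f"d={d}: Z={z_true:.2f} m, range error={z_est - z_true:.3f} m")
```

Here the 1-pixel error yields roughly 2 cm of range error for the near point but over 2.5 m for the far one, which is why thresholding disparity error alone (as BMP does) says nothing about range accuracy.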

Highlights

  • A stereo correspondence algorithm uses a stereo image pair as an input and produces an estimated disparity map as an output [1,2]

  • The results vary with the choice of objective measure: under the Structural SIMilarity Index (SSIM)-based objective measures described here, the AdaptWeight algorithm provides disparity estimations that are closer to the ground-truth data than those of the TreeDP algorithm

  • In order to perform disparity algorithm evaluation, two methodologies were applied: Middlebury’s methodology and the A∗ groups methodology. With pixel-based objective measures, the algorithms were ranked by sorting the averages of the 12 rank values obtained across the different stereo image pairs (Tsukuba, Venus, Teddy and Cones) and criteria; with the SSIM-based and the proposed local-based objective measures, the ranking was obtained by sorting the average ranks over the four stereo image pairs
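The rank-averaging step described in the last highlight can be sketched as follows. The algorithm names match those mentioned in the paper, but the per-pair rank values here are hypothetical placeholders for illustration only:

```python
import numpy as np

# Hypothetical ranks of three algorithms on the four Middlebury pairs
# (Tsukuba, Venus, Teddy, Cones); lower rank = better.
ranks = {
    "AdaptWeight": [1, 2, 1, 2],
    "TreeDP":      [3, 3, 2, 3],
    "GC":          [2, 1, 3, 1],
}
# Average each algorithm's ranks, then sort by average (best first),
# as in the rank-averaging methodology described above.
avg = {alg: float(np.mean(r)) for alg, r in ranks.items()}
order = sorted(avg, key=avg.get)
print(order)  # ['AdaptWeight', 'GC', 'TreeDP']
```

With these placeholder ranks, AdaptWeight (average 1.5) precedes GC (1.75) and TreeDP (2.75), mirroring the ordering reported in the highlights.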


Summary

Introduction

A stereo correspondence algorithm uses a stereo image pair as an input and produces an estimated disparity map (a new image) as an output [1,2]. The accuracy of stereo correspondence algorithms can be assessed by evaluating their disparity maps, either qualitatively or quantitatively. A quantitative (or objective) approach is robust against several human-related biasing factors, offering advantages over a qualitative (subjective) approach. This assessment has practical applications such as component and procedure comparison, parameter tuning, and support for decision-making by researchers and practitioners, and in general it measures progress in the field. The results of objective measures are used as inputs for quantitative evaluation methodologies, which aim to evaluate the performance of stereo algorithms. Among the existing quantitative evaluation methodologies, Middlebury’s methodology is commonly used [12]; it uses the percentage of bad matched pixels (BMP) as the error measure. The next section describes the methodologies and dataset used for disparity map evaluation. This is followed by sections on the experimental evaluation of known and newly proposed objective error measures.
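As a concrete reference, BMP is computed as the percentage of pixels whose absolute disparity error exceeds a tolerance. A minimal sketch, assuming dense disparity maps as NumPy arrays (the function name and the optional validity mask for missing data are illustrative, not the paper's code):

```python
import numpy as np

def bad_matched_pixels(d_est, d_gt, tolerance=1.0, mask=None):
    """Percentage of pixels whose disparity error exceeds `tolerance`.

    `mask` optionally marks valid ground-truth pixels, so missing data
    (e.g. occlusions) can be excluded from the count.
    """
    err = np.abs(np.asarray(d_est, float) - np.asarray(d_gt, float))
    if mask is None:
        mask = np.ones_like(err, dtype=bool)
    bad = (err > tolerance) & mask
    return 100.0 * bad.sum() / mask.sum()

# Toy 2x2 maps: one of four pixels is off by 2 disparity levels.
gt = np.array([[10, 10], [20, 20]])
est = np.array([[10, 12], [20, 20]])
print(bad_matched_pixels(est, gt, tolerance=1.0))  # 25.0
```

Note that the result depends directly on the chosen tolerance (here 1.0 disparity level), which is exactly the dependence the paper's adapted measures aim to avoid.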

Methodologies and Dataset Used for Disparity Maps’ Evaluation
Dataset Description
Middlebury’s Methodology
Disparity Evaluation Using Objective Measures
Disparity Evaluation Using Pixel-Based Objective Measures
Disparity Evaluation Using SSIM-Based Objective Measures
New Local-Based Objective Measures for Disparity Evaluation
Results
Conclusions
