Abstract
An important aspect of perceptual learning involves understanding how well individuals can perceive distances, sizes, and time-to-contact. Often, the primary goal in these experiments is to assess participants' errors (i.e., how accurately participants perform these tasks). However, the manner in which researchers have quantified error, or task accuracy, has varied. The use of different measures of task accuracy, including error scores, ratios, and raw estimates, means that the interpretation of findings depends on which measure of task accuracy is used. In an effort to better understand this issue, we used a Monte Carlo simulation to evaluate five dependent measures of accuracy: raw distance judgments, a ratio of true to estimated distance judgments, relative error, signed error, and absolute error. We simulated data consistent with prior findings in the distance perception literature and evaluated how findings and interpretations vary as a function of the measure of accuracy used. We found differences in both statistical findings (e.g., overall model fit, mean square error, Type I error rate) and the interpretations of those findings. The costs and benefits of each measure for quantifying accuracy in distance estimation studies are discussed.
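For readers unfamiliar with these dependent measures, the following sketch shows how the five quantities named above might be computed for a single distance judgment. The definitions are assumptions based on common usage in the distance-perception literature (and the abstract's "ratio of true to estimated" wording); the paper's exact formulas may differ.

```python
def accuracy_measures(true_d, est_d):
    """Return five common accuracy measures for one distance judgment.

    true_d -- the actual distance to the target (e.g., meters)
    est_d  -- the participant's estimated distance

    These definitions are illustrative, not taken from the paper itself.
    """
    signed = est_d - true_d        # signed error: preserves direction of bias
    absolute = abs(signed)         # absolute error: magnitude of error only
    relative = signed / true_d     # relative error: bias as a proportion of true distance
    ratio = true_d / est_d         # ratio of true to estimated distance (per the abstract)
    return {
        "raw": est_d,              # raw distance judgment, analyzed as-is
        "ratio": ratio,
        "relative_error": relative,
        "signed_error": signed,
        "absolute_error": absolute,
    }

# Example: an observer judges a 10 m target to be 8 m away (underestimation).
m = accuracy_measures(10.0, 8.0)
print(m)
```

Note how the measures carry different information: signed and relative error preserve the direction of bias (underestimation yields negative values), while absolute error discards it, which is one reason analyses based on different measures can lead to different conclusions.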