Abstract

Myung, Kim, and Pitt (2000) demonstrated that simple power functions almost always provide a better fit to purely random data than do simple exponential functions. This result has important implications, because it suggests that high noise levels, which are common in psychological experiments, may cause a bias favoring power functions. We replicate their result and extend it by showing strong bias for more realistic sample sizes. We also show that biases occur for data that contain both random and systematic components, as may be expected in real data. We then demonstrate that these biases disappear for two- or three-parameter functions that include linear parameters (in at least one parameterization). Our results suggest that one should exercise caution when proposing simple power and exponential functions as models of learning. More generally, our results suggest that linear parameters should be estimated rather than fixed when one is comparing the fit of nonlinear models to noisy data.
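The core simulation described above can be illustrated with a short sketch: fit a one-parameter power function and a one-parameter exponential function to purely random data and count how often the power function wins. The functional forms, noise distribution, and sample sizes below are illustrative assumptions for this sketch, not the exact setup used in the paper's simulations.

```python
# Minimal sketch (not the authors' code): compare one-parameter power and
# exponential fits on purely random "performance" data.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n_trials = 10        # assumed number of practice trials per data set
n_datasets = 1000    # assumed number of simulated data sets

x = np.arange(1, n_trials + 1)

def sse_power(y):
    # Best-fit sum of squared error for y ~ x**(-b), b >= 0
    f = lambda b: np.sum((y - x ** (-b)) ** 2)
    return minimize_scalar(f, bounds=(0.0, 10.0), method="bounded").fun

def sse_expon(y):
    # Best-fit sum of squared error for y ~ exp(-b * x), b >= 0
    f = lambda b: np.sum((y - np.exp(-b * x)) ** 2)
    return minimize_scalar(f, bounds=(0.0, 10.0), method="bounded").fun

power_wins = 0
for _ in range(n_datasets):
    y = rng.uniform(0.0, 1.0, size=n_trials)  # purely random data
    if sse_power(y) < sse_expon(y):
        power_wins += 1

print(f"Power fits better in {power_wins / n_datasets:.1%} of random data sets")
```

An analogous check of the abstract's second claim would add estimated linear parameters (a scale and an asymptote, e.g. y = a·x^(-b) + c versus y = a·e^(-bx) + c) and verify that the asymmetry between the two functional families shrinks or disappears.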
