Abstract

In two experiments we employed calibration methods to investigate the realism of participants' confidence ratings of their own classification performance based on knowledge acquired after training on an artificial grammar. In Experiment 1 participants showed good realism (but overconfidence) for grammatical strings but very poor realism for non-grammatical strings. Method of training (string repetition in writing or mere exposure) did not affect the realism. Furthermore, the participants underestimated their overall performance. In Experiment 2, using a more complex grammar and controlling for two types of associative chunk-strength, participants showed good realism (but still overconfidence) for both letter and symbol strings, irrespective of grammaticality. Together, these experiments show that implicit learning can give rise to knowledge products that are associated with fairly realistic meta-knowledge. It is argued that both the zero-correlation criterion and the guessing criterion are misplaced when ...
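The over/underconfidence measure mentioned above is standard in calibration research: mean confidence minus proportion correct, with positive values indicating overconfidence. The sketch below is not from the paper itself, but illustrates how that quantity is typically computed; the function name and example data are hypothetical.

```python
def over_underconfidence(confidences, correct):
    """Mean confidence minus proportion correct.

    Positive result = overconfidence; negative = underconfidence.
    `confidences` are ratings on a 0-1 scale; `correct` is 1/0 per item.
    """
    mean_conf = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return mean_conf - accuracy

# Hypothetical example: four classification trials
conf = [0.9, 0.8, 0.7, 0.6]   # confidence ratings
acc = [1, 1, 0, 0]            # 1 = classified correctly
score = over_underconfidence(conf, acc)  # 0.75 - 0.50 = overconfident
```

Good realism, in these terms, means this difference is close to zero; the zero-correlation criterion instead asks whether confidence and accuracy are correlated at all across items.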


