Poster presented at the Vision Science Society Annual Meeting, May 2014, St. Pete Beach, FL

A growing interest in individual differences in face and object recognition motivates the development of visual learning tests such as the Cambridge Face Memory Test (Duchaine & Nakayama, 2006) and the Vanderbilt Expertise Test (VET; McGugin et al., 2012). But experience with a category may also result in non-perceptual knowledge. We seek a reliable and valid measure of non-perceptual semantic knowledge, in a standard format, for a variety of object categories (rather than a single category; Barton et al., 2009; Van Gulick & Gauthier, VSS 2013), one that applies to the full range of expertise in a domain. The Semantic Vanderbilt Expertise Test (SVET) focuses on one aspect of semantic knowledge that can be measured across categories: acquisition of relevant nomenclature. Each trial presents a triplet of names: one real name of an object in the category and two foils. The SVET 1.0 includes 7 categories: birds, cars, dinosaurs, planes, shoes, Transformers, and trees. Through multiple iterations of data collection on Amazon Mechanical Turk, test items were fine-tuned based on factor analysis, classical item analysis, and item response theory. In data from samples of 96–101 subjects per category, all tests showed good internal consistency (Cronbach's alpha). We validated each SVET against subjects' self-reports of their category knowledge, assessed with 7 domain-specific questions (Gauthier et al., submitted). Across all categories, subjects' general rating of their category experience was most strongly correlated with SVET performance (r=.38), followed by their rating of how detailed an essay they could write about the category (r=.35). Interestingly, as with perceptual performance on the VET, SVET correlations with age and gender differed across categories, suggesting a role for experience. The SVET can provide an assessment of semantic knowledge to complement visual measures, and can help clarify how performance is determined by the interaction of perceptual and cognitive abilities with experience.

This work is supported by NSF (SBE-0542013), the Vanderbilt Vision Research Center (P30-EY008126), and the National Eye Institute (R01 EY013441).
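
For reference (this formula is not given in the abstract itself), the internal-consistency statistic reported above, Cronbach's alpha, is conventionally computed from the number of test items k, the per-item score variances, and the variance of subjects' total scores:

```latex
% Cronbach's alpha: internal-consistency estimate for a k-item test.
% \sigma^{2}_{Y_i} is the variance of scores on item i across subjects;
% \sigma^{2}_{X} is the variance of total test scores across subjects.
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
```

Values approaching 1 indicate that items covary as measures of a common construct; by common convention, values of roughly .8 or above are described as good internal consistency.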