Abstract

Similarity governs our perception and reasoning, helping us relate new stimuli to long-established category labels and generalize learned behaviors to novel situations. Similarity has often been explained as arising from commonality of features and parts (Attneave, 1950; Tversky & Hemenway, 1984) and as a defining metric for categorization processes (Ashby & William, 1991; McClelland & Rogers, 2004). However, it remains unclear how feature-based (implicit) and explicit components of similarity combine to give rise to perceptual similarity for real-world objects. Here, we collected pairs of explicit unconstrained ('How similar are these two animals?') and dimension-cued similarity judgments, as well as feature ratings used to derive an implicit measure of similarity, for ten basic-level animals across twelve similarity dimensions (six objective, e.g., size; six subjective, e.g., cuteness), presented as either text labels or short videos. Participants' explicit similarity judgments were virtually unaffected by presentation modality (r = 0.95). Feature-based similarity significantly predicted dimension-cued similarity (top half of dimensions: r = 0.63–0.92, p < 0.001), and dimension-cued similarity significantly predicted unconstrained similarity (top half of dimensions: r = 0.78–0.96, p < 0.001). However, feature-based similarity could not explain unconstrained similarity on a dimension-by-dimension basis, but only when all implicit similarity dimensions were linearly combined into an aggregate measure (equal-weight: r = 0.35, p < 0.05; optimal-weight: r = 0.65, p < 0.001). Furthermore, we observed an interaction between subjectivity and explicitness: subjective implicit dimensions explained more variance in explicit similarity, whereas objective explicit dimensions explained more variance in unconstrained similarity (subjectivity main effect p < 0.01, interaction p < 0.01). Together, our results suggest that feature-based and dimension-cued similarity may combine in a non-trivial way, depending on feature subjectivity, to generate similarity judgments. Given recent work showing an interaction between cognitive control and inferotemporal regions in computing similarity judgments (Keung et al., 2016), our results provide an interesting hypothesis for elucidating the neural components of similarity and its susceptibility to attention and other sources of bias. Meeting abstract presented at VSS 2017.
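
As a rough illustration of the aggregation step described above, the sketch below (Python, not the authors' analysis code) shows one way per-dimension feature-based similarities could be derived from mean feature ratings and then combined with equal or fitted ("optimal") weights to predict unconstrained judgments. The variables `feature_ratings` (a mapping from each of the twelve dimensions to ten per-animal mean ratings) and `unconstrained` (the 45 mean pairwise unconstrained judgments) are hypothetical placeholders, the negative-absolute-difference similarity measure is one simple choice rather than the study's actual derivation, and the in-sample least-squares fit stands in for whatever weighting procedure was used.

```python
# Minimal sketch (not the authors' code): combining per-dimension implicit
# similarities into equal-weight and optimal-weight aggregates.
# Hypothetical inputs:
#   feature_ratings: dict mapping each of the 12 dimensions to a length-10
#                    array of mean feature ratings (one per animal)
#   unconstrained:   length-45 array of mean pairwise unconstrained
#                    similarity judgments (one per animal pair)
from itertools import combinations

import numpy as np
from scipy.stats import pearsonr

n_animals = 10
pairs = list(combinations(range(n_animals), 2))  # 45 unordered animal pairs


def implicit_similarity(ratings):
    """Feature-based similarity on one dimension: negative absolute difference
    of the two animals' mean ratings (closer ratings -> more similar)."""
    return np.array([-abs(ratings[i] - ratings[j]) for i, j in pairs])


# One predictor column per dimension, shape (45 pairs, 12 dimensions), z-scored
X = np.column_stack([implicit_similarity(r) for r in feature_ratings.values()])
Xz = (X - X.mean(axis=0)) / X.std(axis=0)

# Equal-weight aggregate: unweighted mean of the z-scored dimension similarities
equal_weight = Xz.mean(axis=1)
r_eq, p_eq = pearsonr(equal_weight, unconstrained)

# Optimal-weight aggregate: least-squares weights (plus intercept) fit in-sample
design = np.column_stack([Xz, np.ones(len(pairs))])
w, *_ = np.linalg.lstsq(design, unconstrained, rcond=None)
optimal_weight = design @ w
r_opt, p_opt = pearsonr(optimal_weight, unconstrained)

print(f"equal-weight   r = {r_eq:.2f}, p = {p_eq:.3g}")
print(f"optimal-weight r = {r_opt:.2f}, p = {p_opt:.3g}")
```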
