Abstract

Object similarity, in brain representations and conscious perception, must reflect a combination of the visual appearance of the objects on the one hand and the categories the objects belong to on the other. Indeed, visual object features and category membership have each been shown to contribute to the object representation in human inferior temporal (IT) cortex, as well as to object-similarity judgments. However, the explanatory power of features and categories has not been directly compared. Here, we investigate whether the IT object representation and similarity judgments are best explained by a categorical or a feature-based model. We use rich models (>100 dimensions) generated by human observers for a set of 96 real-world object images. The categorical model consists of a hierarchically nested set of category labels (such as “human”, “mammal”, and “animal”). The feature-based model includes both object parts (such as “eye”, “tail”, and “handle”) and other descriptive features (such as “circular”, “green”, and “stubbly”). We used non-negative least squares to fit the models to the brain representations (estimated from functional magnetic resonance imaging data) and to similarity judgments. Model performance was estimated on held-out images not used in fitting. Both models explained significant variance in IT, and the amounts explained were not significantly different. The combined model did not explain significant additional IT variance, suggesting that it is the shared model variance (features correlated with categories, categories correlated with features) that best explains IT. The similarity judgments were almost fully explained by the categorical model, which explained significantly more variance than the feature-based model. The combined model did not explain significant additional variance in the similarity judgments. Our findings suggest that IT uses features that help to distinguish categories as stepping stones toward a semantic representation. Similarity judgments contain additional categorical variance that is not explained by visual features, reflecting a higher-level, more purely semantic representation.
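The non-negative least-squares fitting described in the abstract can be illustrated with a short Python sketch. This is not the authors' code: the dimension count (114 category dimensions) and image count (96) come from the article, but the predictors and dissimilarities below are randomly generated stand-ins. Each model dimension predicts a pattern of pairwise object dissimilarities, and NNLS finds non-negative weights on those dimensions.

```python
# Illustrative sketch (not the authors' pipeline) of fitting a
# representational model to measured dissimilarities with NNLS.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

n_images = 96                              # images in the stimulus set
n_pairs = n_images * (n_images - 1) // 2   # unique image pairs (4560)
n_dims = 114                               # e.g. category dimensions

# Stand-in data: one predictor column per model dimension, and a
# vector of measured dissimilarities (e.g. fMRI-derived).
X = rng.random((n_pairs, n_dims))
y = rng.random(n_pairs)

weights, residual = nnls(X, y)             # non-negative weights
predicted = X @ weights                    # model-predicted dissimilarities
```

The non-negativity constraint means each model dimension can only add dissimilarity, which keeps the fitted weights interpretable as contributions of individual categories or features.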

Highlights

  • Object similarity, in brain representations and conscious perception, must reflect a combination of the visual appearance of the objects on the one hand and the categories the objects belong to on the other

  • We have shown that visual features can explain the inferior temporal (IT) representation to a considerable extent and that categorical predictors do not explain additional IT variance beyond that explained by features

  • Only visual features related to categories appeared effective at explaining IT representational variance


Introduction

Category membership plays an important role in explaining IT responses. Object category membership is a characteristic of the whole object and requires a representation that is invariant to variations in visual appearance among members of the same category. Perceived object similarity has been shown to reflect both the continuous and categorical components of the IT object representation (Edelman et al., 1998; Op de Beeck et al., 2001, 2008b; Haushofer et al., 2008; Mur et al., 2013). This leaves open what the relative contributions of visual features and categories are to perceived object similarity. To address this question, we constructed a categorical model and a feature-based model; the feature-based model includes both object parts (such as “eye”, “tail”, and “handle”) and other descriptive features (such as “circular”, “green”, and “stubbly”). These rich models (114 category dimensions, 120 feature-based dimensions) were fitted to the brain representation of the objects in IT and early visual cortex (based on functional magnetic resonance imaging data), and to human similarity judgments for the same set of objects. We used representational similarity analysis (Kriegeskorte et al., 2008a; Nili et al., 2014) to compare the performance of the feature-based and categorical models in explaining the IT representation and the similarity judgments.
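The cross-validated model evaluation described above can be sketched in Python. This is a hedged illustration, not the authors' analysis: the data are synthetic, and the split sizes are arbitrary. The key idea is that model weights are fitted on one set of image pairs and the model's predictive accuracy is then measured on held-out pairs, so that a model with more dimensions cannot win simply by overfitting.

```python
# Hedged sketch of held-out model evaluation in the spirit of RSA:
# fit NNLS weights on training pairs, score on held-out pairs.
# Data are synthetic; this is not the authors' pipeline.
import numpy as np
from scipy.optimize import nnls
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_train, n_test, n_dims = 400, 150, 30     # arbitrary illustrative sizes

# Synthetic ground truth with non-negative generating weights.
w_true = rng.random(n_dims)
X_train = rng.random((n_train, n_dims))
X_test = rng.random((n_test, n_dims))
y_train = X_train @ w_true + 0.05 * rng.standard_normal(n_train)
y_test = X_test @ w_true + 0.05 * rng.standard_normal(n_test)

w_fit, _ = nnls(X_train, y_train)          # fit on training pairs only
r, _ = pearsonr(X_test @ w_fit, y_test)    # held-out prediction accuracy
```

Comparing the held-out correlation `r` across the categorical, feature-based, and combined models is one way to ask whether one model explains variance that the other does not.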

Methods
Object-similarity judgments
Defining the categorical and feature-based models
Experiment 1
Experiment 2
Non-negative least-squares fitting of the representational models
Comparing the explanatory power of categorical and feature-based models
What dimensions do the categorical and feature-based models consist of?
Visual features as stepping stones toward semantics
Conclusion
