Abstract
Understanding the robustness and rapidness of human scene categorization has been a focus of investigation in the cognitive sciences over recent decades. At the same time, progress in the area of image understanding has prompted computer vision researchers to design computational systems that are capable of automatic scene categorization. Despite these efforts, a framework describing the processes underlying human scene categorization that would enable efficient computer vision systems is still missing. In this study, we present both psychophysical and computational experiments that aim to take a further step in this direction by investigating the processing of local and global information in scene categorization. In a set of human experiments, categorization performance is tested when only local or only global image information is present. Our results suggest that humans rely on local, region-based information as much as on global, configural information. In addition, humans seem to integrate both types of information for intact scene categorization. In a set of computational experiments, human performance is compared to two state-of-the-art computer vision approaches that model either local or global information.