Abstract

This paper studies the class-agnostic counting problem, which aims to count objects regardless of their class, relying only on a limited number of exemplar objects. Existing methods usually extract visual features from query and exemplar images, compute the similarity between them using convolution operations, and finally use this information to estimate object counts. However, these approaches often overlook the scale information of the exemplar objects, leading to lower counting accuracy for objects with multi-scale characteristics. Additionally, convolution is a local, linear matching process that may lose semantic information, which can limit the performance of the counting algorithm. To address these issues, we devise a new scale-aware transformer-based feature fusion module that integrates the visual and scale information of exemplar objects and models the similarity between exemplars and queries using cross-attention. Finally, we propose an object counting algorithm, called SATCount, composed of a feature extraction backbone, the feature fusion module, and a density map regression head. Our experiments on the FSC-147 and CARPK datasets demonstrate that our model outperforms state-of-the-art methods.
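The fusion described above can be illustrated with a minimal sketch: exemplar tokens are augmented with an embedding of their normalized box size, and query-feature pixels then attend to these tokens via scaled dot-product cross-attention. All names, dimensions, and the random linear projection standing in for a learned scale-embedding layer are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cross_attention(query_feats, exemplar_feats):
    """Scaled dot-product cross-attention: each query pixel attends to the
    exemplar tokens (used here as both keys and values).

    query_feats: (N_q, d) flattened query-image features
    exemplar_feats: (N_e, d) exemplar tokens
    """
    d = query_feats.shape[-1]
    scores = query_feats @ exemplar_feats.T / np.sqrt(d)   # (N_q, N_e)
    scores -= scores.max(axis=-1, keepdims=True)           # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax over exemplars
    return weights @ exemplar_feats                        # (N_q, d) fused features

def scale_embedding(box_hw, img_hw, d, rng):
    """Map each exemplar's normalized (h, w) box size to a d-dim embedding.
    A random linear projection stands in for a learned layer (assumption)."""
    norm_hw = np.asarray(box_hw, dtype=float) / np.asarray(img_hw, dtype=float)
    proj = rng.standard_normal((2, d)) * 0.02
    return norm_hw @ proj

# Toy usage: a 16x16 query feature map (d=64) and 3 exemplar boxes in a 512x512 image.
rng = np.random.default_rng(0)
query = rng.standard_normal((16 * 16, 64))
exemplars = rng.standard_normal((3, 64))
exemplars = exemplars + scale_embedding([[40, 32], [80, 64], [20, 16]], [512, 512], 64, rng)
fused = cross_attention(query, exemplars)
print(fused.shape)  # (256, 64)
```

In this sketch the fused per-pixel features would then be fed to a density map regression head, whose spatial sum yields the predicted count.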

