Abstract

Target detection using attention models has recently become a major research topic in active vision. A central problem in this area is how to weight low-level features appropriately to obtain high-quality top-down saliency maps that highlight target objects. Such weights have previously been learned from example images with similar feature distributions, without considering contextual information. In this paper, we propose a model, which we refer to as top-down contextual weighting (TDCoW), that incorporates high-level knowledge of the gist context of images to apply appropriate weights to the features. The proposed model is tested on four challenging datasets: two for cricket-ball detection, one for bike detection and one for person detection. The results show the effectiveness of contextual information for modelling top-down saliency, producing better feature weights than those obtained without contextual information.
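The core operation the abstract describes, combining low-level feature maps into a top-down saliency map via learned weights, can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the function and variable names are hypothetical, and TDCoW additionally learns weight sets conditioned on the image's gist context, which is only indicated here by the comment on the weight vector.

```python
import numpy as np

def topdown_saliency(feature_maps, weights):
    """Combine low-level feature maps into a saliency map as a
    weighted sum, then normalise to [0, 1]. Illustrative only;
    the TDCoW model selects the weights per gist context."""
    sal = np.zeros_like(feature_maps[0], dtype=float)
    for fmap, w in zip(feature_maps, weights):
        sal += w * fmap
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else sal

# Toy 2x2 "feature maps" (e.g. colour, intensity, orientation)
maps = [np.array([[0.2, 0.8], [0.1, 0.4]]),
        np.array([[0.5, 0.5], [0.9, 0.1]]),
        np.array([[0.0, 1.0], [0.3, 0.6]])]
# Hypothetical context-dependent weights, e.g. drawn from the
# weight set associated with the image's gist category
w_context = [0.6, 0.1, 0.3]
saliency = topdown_saliency(maps, w_context)
```

The weighted-sum form makes the role of the weights explicit: a context that favours, say, colour features for a given target class simply assigns that channel a larger coefficient.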

