Abstract
Sequential visual tasks usually require attending to the currently relevant object conditional on previous observations. In contrast to the popular soft attention mechanism, we propose a new attention framework that introduces a novel conditional global feature, which serves as a weak feature descriptor of the currently attended object. Specifically, in a standard CNN (Convolutional Neural Network) pipeline, convolutional layers with different receptive fields produce attention maps by measuring how well the convolutional features align with the conditional global feature. The conditional global feature can be generated by different recurrent structures depending on the visual task, such as a simple recurrent neural network for multiple-object recognition or a moderately complex language model for image captioning. Experiments show that our proposed conditional attention model achieves the best performance on the SVHN (Street View House Numbers) dataset both with and without extra bounding boxes; for image captioning, our attention model yields better scores than the popular soft attention model.
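The core alignment step described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name is hypothetical, and a dot-product alignment followed by a spatial softmax is assumed as the scoring rule, since the abstract only states that attention maps measure how the convolutional features align with the conditional global feature.

```python
import numpy as np

def conditional_attention(conv_feats, global_feat):
    """Hypothetical sketch of the conditional attention step.

    conv_feats:  (H, W, C) feature map from one convolutional layer.
    global_feat: (C,) conditional global feature describing the
                 currently attended object (e.g. produced by an RNN).
    Returns the (H, W) attention map and the attended (C,) feature.
    """
    H, W, C = conv_feats.shape
    flat = conv_feats.reshape(-1, C)                 # (H*W, C)
    # Alignment score at each spatial location: dot product with the
    # conditional global feature (assumed scoring function).
    scores = flat @ global_feat                      # (H*W,)
    # Softmax over spatial positions yields the attention map.
    scores = scores - scores.max()                   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # (H*W,), sums to 1
    attn_map = weights.reshape(H, W)
    # Attended feature: attention-weighted sum of the conv features.
    attended = (weights[:, None] * flat).sum(axis=0) # (C,)
    return attn_map, attended
```

In a sequential task, the recurrent model would regenerate `global_feat` at each step (one digit for SVHN, one word for captioning), so the attention map shifts to a new object conditioned on what has already been observed.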
Highlights
Recent successes in machine translation [1], speech recognition [2], and image caption [3] have witnessed the important role of attention mechanism
He et al.: Conditionally Learn to Pay Attention for Sequential Visual Task
Drawing on work employing attention in image caption [6]–[9] and the new attention mechanism in [12], we propose a novel conditional attention framework for sequential visual tasks
For weakly supervised segmentation and image captioning, we demonstrate that a language model can be incorporated into this attention framework
Summary
Recent successes in machine translation [1], speech recognition [2], and image caption [3] have witnessed the important role of the attention mechanism. Several kinds of attention approaches have been proposed to tackle challenging visual tasks. [4], [5] proposed the hard attention mechanism for multiple-object recognition by glimpsing only a few local patches of a large image. [6] proposed both soft and hard attention methods for image caption, which enable the model to automatically generate a caption describing the content of an image. [7]–[9] improved the performance of image caption by incorporating more structural information into the soft attention framework. Attention approaches were also introduced into the visual question answering (VQA) task, greatly improving overall performance [9]–[11].