Abstract

Referring expressions are natural language descriptions of objects within a given scene. Context is of crucial importance for a referring expression, as the description depicts not only the properties of the referred object but also its relationships with other objects. Most previous work uses either the whole image or a single contextual object as the context. However, such context is either overly holistic or insufficient, since a referring expression often describes relationships among multiple objects in an image. To leverage the rich contextual information from all objects in an image, in this paper we propose a novel scheme composed of a visual context long short-term memory (LSTM) module and a sentence LSTM module that models bundled object context for referring expressions. All contextual objects are ordered by their spatial locations and progressively fed into the visual context LSTM module to acquire and aggregate context features. The concatenation of the learned context features and the features of the referred object is then fed into the sentence LSTM module to learn the probability of a referring expression. The feedback connections and internal gating mechanism of the LSTM cells enable our model to selectively propagate relevant contextual information through the whole network. Experiments on three benchmark datasets show that our method achieves promising results compared with state-of-the-art methods. Moreover, visualization of the internal states of the visual context LSTM cells shows that our method can automatically select the pertinent context objects.

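To make the described data flow concrete, the following is a minimal sketch of the two-module design, written in PyTorch as an assumption about one plausible realization, not the authors' implementation. The module names, feature dimensions (obj_feat_dim, ctx_hidden, embed_dim, sent_hidden), and the simple concatenation-based fusion are all hypothetical choices for illustration; only the overall flow (context LSTM aggregates contextual objects, its output is concatenated with the referred object's features and conditions a sentence LSTM) follows the abstract.

```python
import torch
import torch.nn as nn

class BundledContextModel(nn.Module):
    """Hypothetical sketch of the two-module scheme: a visual context LSTM
    that aggregates features of all contextual objects (in spatial order),
    and a sentence LSTM that scores a referring expression conditioned on
    the referred object plus the aggregated context."""

    def __init__(self, obj_feat_dim=2048, ctx_hidden=512,
                 vocab_size=10000, embed_dim=300, sent_hidden=512):
        super().__init__()
        # Visual context LSTM: consumes contextual object features one by one
        # and aggregates them in its hidden state.
        self.ctx_lstm = nn.LSTM(obj_feat_dim, ctx_hidden, batch_first=True)
        # Word embedding and sentence LSTM over the expression tokens.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.sent_lstm = nn.LSTM(embed_dim + obj_feat_dim + ctx_hidden,
                                 sent_hidden, batch_first=True)
        self.word_logits = nn.Linear(sent_hidden, vocab_size)

    def forward(self, ctx_feats, obj_feat, tokens):
        # ctx_feats: (B, N, obj_feat_dim) contextual objects in spatial order
        # obj_feat:  (B, obj_feat_dim)    features of the referred object
        # tokens:    (B, T)               word indices of the expression
        _, (ctx_h, _) = self.ctx_lstm(ctx_feats)       # aggregate context
        ctx = ctx_h[-1]                                # (B, ctx_hidden)
        visual = torch.cat([obj_feat, ctx], dim=1)     # fuse object + context
        words = self.embed(tokens)                     # (B, T, embed_dim)
        # Condition every time step on the fused visual vector.
        visual_seq = visual.unsqueeze(1).expand(-1, words.size(1), -1)
        out, _ = self.sent_lstm(torch.cat([words, visual_seq], dim=2))
        return self.word_logits(out)                   # per-step word scores
```

Under these assumptions, training with a per-step cross-entropy loss over the next word would yield the probability of a referring expression given the referred object and its bundled context, which is the role the abstract assigns to the sentence LSTM module.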