Abstract

Remote sensing image (RSI) captioning aims to generate sentences that describe the content of RSIs. In most caption datasets, each RSI is annotated with five sentences. Because annotators attend to different parts of an image, each sentence covers only part of the image's content, and a single sentence may be ambiguous when compared with the other four. Previous methods, which treat the five sentences separately, may therefore generate an ambiguous sentence. To consider the five sentences jointly, a collection of words, named topic words, that captures the information common to the five sentences is incorporated into the captioning model to generate a determinate sentence covering the common content of the RSI. Instead of a plain recurrent neural network, a memory network, in which topic words can be naturally included as memory cells, is introduced to generate sentences. A novel retrieval topic recurrent memory network is proposed to utilize the topic words. First, a topic repository is built to record the topic words in the training dataset. Then, a retrieval strategy is exploited to obtain the topic words for a test image from the topic repository. Finally, the retrieved topic words are incorporated into a recurrent memory network to guide sentence generation. Besides obtaining topics through retrieval, the topic words of a test image can also be edited manually, which sheds light on the controllability of caption generation. Experiments are conducted on two caption datasets to evaluate the proposed method.
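The retrieval step of the pipeline can be sketched as follows. This is a minimal illustration, not the paper's implementation: the repository entries, feature vectors, and topic words are all hypothetical, and a simple cosine similarity over image features stands in for whatever retrieval strategy the authors use.

```python
import numpy as np

# Hypothetical topic repository: for each training image we store an image
# feature vector and the topic words shared by its five reference sentences.
repository = [
    {"feature": np.array([0.9, 0.1, 0.0]), "topics": ["airport", "airplane", "runway"]},
    {"feature": np.array([0.1, 0.8, 0.2]), "topics": ["river", "bridge", "trees"]},
]

def retrieve_topic_words(query_feature, repository, k=1):
    """Return the topic words of the k most similar training images."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    ranked = sorted(repository,
                    key=lambda entry: cosine(query_feature, entry["feature"]),
                    reverse=True)
    topics = []
    for entry in ranked[:k]:
        topics.extend(entry["topics"])
    return topics

# A test image whose features resemble the airport scene retrieves its topics.
print(retrieve_topic_words(np.array([1.0, 0.2, 0.1]), repository))
# -> ['airport', 'airplane', 'runway']
```

The retrieved topic words would then be written into the memory cells of the recurrent memory network before decoding, so that the generated sentence is steered toward content shared by all five reference sentences.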

Highlights

  • Remote sensing image (RSI) captioning aims to generate a concise sentence automatically given a high-resolution RSI [1]

  • A novel retrieval topic recurrent memory network (MN) is proposed to utilize the topic words as part of extensible memory cells, which overcomes the long-term information dilution of recurrent neural networks (RNNs)

  • The results of mRNN and mLSTM are from paper [1]

Introduction

Remote sensing image (RSI) captioning aims to generate a concise sentence automatically given a high-resolution RSI [1]. Many traditional remote sensing tasks concentrate on image processing or low-level semantic information. In contrast, RSI captioning concentrates on generating high-level semantic information (a descriptive sentence) and has received a significant amount of attention [1], [11]–[13]. Automatic caption generation can provide more semantic information about an RSI. Template-based methods generate sentences based on object detection [12]: the detection results fill the subject or object slots of fixed sentence templates. Sentences generated by template-based methods are therefore relatively simple and fixed in form.
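To make the limitation of template-based methods concrete, a minimal sketch of slot filling is shown below. The template string and detector labels are assumptions for illustration, not the templates used in [12]; the point is that every caption shares one rigid sentence frame.

```python
# A fixed sentence template whose subject/object slots are filled with the
# labels produced by an object detector (hypothetical template and labels).
TEMPLATE = "There are {objects} in this {scene}."

def template_caption(detections, scene):
    """Fill the fixed template with detected object labels."""
    objects = " and ".join(detections) if detections else "no salient objects"
    return TEMPLATE.format(objects=objects, scene=scene)

print(template_caption(["airplanes", "cars"], "airport"))
# -> "There are airplanes and cars in this airport."
```

Because every output reuses the same frame, such captions read as simple and fixed in form, which is the shortcoming that learning-based captioning models aim to overcome.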
