Abstract

Image captioning is a fundamental task connecting computer vision and natural language processing. Recent research usually concentrates on generic image or video captioning among thousands of classes. However, such methods fail to cover detailed semantics and cannot effectively handle a specific class of objects, such as pedestrians. Pedestrian captioning plays a critical role in the analysis, identification, and retrieval of pedestrians in massive collections of video data. Therefore, in this paper, we propose a novel approach to generate multi-view captions for pedestrian images with a topic attention mechanism over global and local semantic regions. First, we detect different local parts of a pedestrian and use a deep convolutional neural network (CNN) to extract a series of features from these local regions and from the whole image. Then, we aggregate these features with a topic attention CNN model to produce a representative vector that richly expresses the image from a different view at each time step. This feature vector is fed into a hierarchical recurrent neural network to generate multi-view captions for pedestrian images. Finally, a new dataset named CASIA_Pedestrian, comprising 5,000 pedestrian image-sentence pairs, is collected to evaluate the performance of pedestrian captioning. Experiments and comparison results show the superiority of the proposed approach.
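
The sketch below illustrates the pipeline the abstract describes: soft attention over global and local region features conditioned on a topic state, a sentence-level RNN that produces one topic per view, and a word-level RNN that decodes each caption. It is a minimal PyTorch sketch under assumed dimensions; the module names, the additive attention form, and the GRU choices are illustrative assumptions, not the authors' exact architecture.

```python
# A minimal, self-contained sketch of the described pipeline in PyTorch.
# All module names, dimensions, and the attention form are illustrative
# assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopicAttention(nn.Module):
    """Soft attention over region features, conditioned on a topic vector."""

    def __init__(self, feat_dim: int, topic_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.proj_feat = nn.Linear(feat_dim, hidden_dim)
        self.proj_topic = nn.Linear(topic_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, regions: torch.Tensor, topic: torch.Tensor) -> torch.Tensor:
        # regions: (B, R, feat_dim) global + local CNN features; topic: (B, topic_dim)
        energy = torch.tanh(self.proj_feat(regions) + self.proj_topic(topic).unsqueeze(1))
        weights = F.softmax(self.score(energy).squeeze(-1), dim=1)  # (B, R)
        return (weights.unsqueeze(-1) * regions).sum(dim=1)         # (B, feat_dim)


class HierarchicalCaptioner(nn.Module):
    """Sentence-level GRU emits one topic per view; word-level GRU decodes words."""

    def __init__(self, feat_dim: int, topic_dim: int, vocab_size: int, emb_dim: int = 128):
        super().__init__()
        self.attention = TopicAttention(feat_dim, topic_dim)
        self.sentence_rnn = nn.GRUCell(feat_dim, topic_dim)  # one step per sentence/view
        self.word_rnn = nn.GRU(emb_dim + feat_dim, topic_dim, batch_first=True)
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.out = nn.Linear(topic_dim, vocab_size)

    def forward(self, regions, words, num_sentences: int):
        # regions: (B, R, feat_dim); words: (B, S, T) token ids for teacher forcing
        B = regions.size(0)
        topic = regions.new_zeros(B, self.sentence_rnn.hidden_size)
        logits = []
        for s in range(num_sentences):
            context = self.attention(regions, topic)   # view-specific representative vector
            topic = self.sentence_rnn(context, topic)  # advance the topic state
            emb = self.embed(words[:, s])               # (B, T, emb_dim)
            ctx = context.unsqueeze(1).expand(-1, emb.size(1), -1)
            out, _ = self.word_rnn(torch.cat([emb, ctx], dim=-1), topic.unsqueeze(0))
            logits.append(self.out(out))                # (B, T, vocab)
        return torch.stack(logits, dim=1)               # (B, S, T, vocab)


if __name__ == "__main__":
    B, R, D, V = 2, 5, 512, 1000  # batch, regions (global + 4 parts), feat dim, vocab
    model = HierarchicalCaptioner(feat_dim=D, topic_dim=256, vocab_size=V)
    regions = torch.randn(B, R, D)           # stand-in for CNN region features
    words = torch.randint(0, V, (B, 3, 7))   # 3 sentences of 7 tokens each
    print(model(regions, words, num_sentences=3).shape)  # torch.Size([2, 3, 7, 1000])
```

In this reading, each pass through the sentence-level loop re-attends to the region features under a new topic state, so successive sentences describe the pedestrian from different views, matching the multi-view captioning behavior described above.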
