Abstract

Video captioning is the task of describing the content of a sequence of images while capturing its semantic relationships and meanings. Generating such a description for a single image is already arduous, and it is even more difficult for a video (or image sequence). Video captioning has numerous relevant applications, such as handling the large volume of recordings produced in video surveillance or assisting visually impaired people, to mention a few. To analyze where our community's efforts to solve the video captioning task stand, as well as which route may be best to follow, this manuscript presents an extensive review of more than 142 papers published between 2016 and 2022. As a result, the most-used datasets and metrics are identified and described. The main approaches, including the best-performing ones, are also analyzed and discussed. Furthermore, we compute a set of rankings based on several performance metrics to identify, according to reported performance, the method achieving the best results on the video captioning task across several datasets and metrics. Finally, we draw some insights about the next steps and opportunity areas for improving work on this complex task.
