Abstract

Automatically describing images with natural-language sentences, known as image captioning, is a challenging research problem at the intersection of computer vision and natural language processing that has recently become very popular in the literature. With the advances in deep learning, recently proposed image captioning approaches are all based on deep artificial neural networks. However, most of these methods focus on the English language, which greatly restricts their applicability to Turkish. Turkish is an agglutinative language in which suffixes can change the meaning of a word entirely; hence, an image captioning approach designed for Turkish should account for the characteristics of the language. In this study, we propose such an image captioning model, which utilizes subword units. Our experimental results show that this model performs considerably better than the word-based model.
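The abstract's core idea is that agglutinative Turkish words (e.g. "evlerde", "in the houses" = ev + ler + de) are better modeled as subword units than as whole words. The paper does not specify its subword algorithm here, so the sketch below is a minimal, hypothetical illustration of one common choice, byte-pair encoding (BPE): frequent adjacent symbol pairs are merged iteratively, so recurring suffixes tend to emerge as units.

```python
from collections import Counter

def train_bpe(words, num_merges):
    """Learn BPE merges from a toy word list (each word weighted equally)."""
    # Represent each word as a tuple of characters plus an end-of-word marker.
    vocab = Counter(tuple(w) + ("</w>",) for w in words)
    merges = []
    for _ in range(num_merges):
        # Count frequencies of adjacent symbol pairs across the vocabulary.
        pairs = Counter()
        for sym, freq in vocab.items():
            for a, b in zip(sym, sym[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the best merge to every word in the vocabulary.
        new_vocab = Counter()
        for sym, freq in vocab.items():
            new_vocab[tuple(_merge(sym, best))] += freq
        vocab = new_vocab
    return merges

def _merge(sym, pair):
    """Replace every occurrence of `pair` in `sym` with the fused symbol."""
    out, i = [], 0
    while i < len(sym):
        if i < len(sym) - 1 and (sym[i], sym[i + 1]) == pair:
            out.append(sym[i] + sym[i + 1])
            i += 2
        else:
            out.append(sym[i])
            i += 1
    return out

def segment(word, merges):
    """Split a word into subword units by replaying the learned merges."""
    sym = tuple(word) + ("</w>",)
    for pair in merges:
        sym = tuple(_merge(sym, pair))
    return list(sym)
```

Trained on a toy corpus sharing the stem "ev" (house), shared suffixes such as "ler" or "de" tend to be merged early, so unseen inflected forms decompose into familiar pieces the captioning model has seen before. The corpus, merge budget, and function names here are illustrative assumptions, not the paper's actual configuration.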
