Abstract

Social media is an interesting source of information, especially when physical sensors are not available. In this paper, we explore several methodologies for the geolocation of multimodal information (image and text) from social networks. To this end, we use pre-trained neural network models for the classification of images and their associated texts. The result is a system that creates new synergies between image and text in order to geolocate information that has not previously been geotagged by any other means, which is potentially relevant for several purposes. Different experiments reveal that, in general, text information is more accurate and relevant than images.

Keywords: Multimodal classification, Location-based retrieval, Transformers, Social networks
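The abstract does not specify how the image and text predictions are combined; a common approach for this kind of multimodal system is late fusion, where each pre-trained model produces per-location probabilities that are then merged. The sketch below is a minimal illustration of that idea under assumed inputs (the function names, the candidate locations, and the weighting scheme are all hypothetical, not taken from the paper); the higher text weight only mirrors the paper's finding that text is generally more informative than images.

```python
# Hedged sketch: late fusion of image- and text-based location predictions.
# All names and numbers here are illustrative assumptions, not the paper's method.

def fuse_predictions(image_probs, text_probs, text_weight=0.7):
    """Weighted average of per-location probabilities from the two models.

    text_weight > 0.5 reflects the reported observation that text tends to
    be more accurate than images for geolocation.
    """
    assert len(image_probs) == len(text_probs)
    fused = [text_weight * t + (1.0 - text_weight) * i
             for i, t in zip(image_probs, text_probs)]
    total = sum(fused)
    return [p / total for p in fused]  # renormalise to a distribution

def predict_location(image_probs, text_probs, locations, text_weight=0.7):
    """Return the candidate location with the highest fused probability."""
    fused = fuse_predictions(image_probs, text_probs, text_weight)
    return locations[max(range(len(fused)), key=fused.__getitem__)]

# Hypothetical example with three candidate cities:
locations = ["Madrid", "Paris", "Rome"]
image_probs = [0.2, 0.5, 0.3]   # the image model favours Paris
text_probs = [0.6, 0.1, 0.3]    # the text model favours Madrid
print(predict_location(image_probs, text_probs, locations))  # → Madrid
```

With the default weighting, the text model's confident vote dominates, so the fused prediction follows the text signal; lowering `text_weight` would shift the balance back toward the image model.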
