Abstract

Visual content on platforms like Instagram and TripAdvisor plays a crucial role in consumer decision-making, particularly in intangible sectors like hospitality, where consumers rely on such information to gauge service quality prior to consumption. In this study, we introduce an approach leveraging deep neural networks to identify high-level, abstract concepts in visual user-generated content (UGC) for restaurants. Given the lack of annotations on these concepts, we propose two weak labeling methods: one utilizing existing restaurant-quality signals, and the other extracting relevant labels from review texts via questionnaires. Our findings reveal that models trained on these inexpensive weak labels demonstrate moderate correlations with human judgments. Furthermore, we showcase the effectiveness of gradient-based techniques in generating visual explanations, highlighting image regions that support neural network predictions. These methods enable visual UGC analysis tasks with minimal labeling effort and allow practitioners to interpret deep neural network predictions more effectively.
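To make the gradient-based visual explanations mentioned above concrete, the sketch below computes a Grad-CAM-style heatmap for an image classifier. It is a minimal illustration, not the authors' implementation: the torchvision ResNet-50 backbone, the layer4 hook target, and the restaurant_photo.jpg path are all illustrative assumptions.

```python
# Minimal sketch of a gradient-based visual explanation (Grad-CAM style).
# Illustration only: the ResNet-50 backbone, the `layer4` hook target, and
# the image path are assumptions, not the paper's actual setup.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def save_grad(grad):
    gradients["value"] = grad

def forward_hook(module, inputs, output):
    # Keep the feature maps and attach a hook that captures their gradients.
    activations["value"] = output
    output.register_hook(save_grad)

model.layer4.register_forward_hook(forward_hook)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("restaurant_photo.jpg").convert("RGB")).unsqueeze(0)

# Forward pass, then backpropagate the score of the predicted class.
scores = model(img)
target_class = scores.argmax(dim=1).item()
model.zero_grad()
scores[0, target_class].backward()

# Weight each feature map by its spatially averaged gradient, combine, rectify.
acts = activations["value"].detach()                         # (1, C, h, w)
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # (1, C, 1, 1)
cam = F.relu((weights * acts).sum(dim=1, keepdim=True))      # (1, 1, h, w)

# Upsample to the input resolution and normalize to [0, 1] for visualization.
cam = F.interpolate(cam, size=img.shape[-2:], mode="bilinear", align_corners=False)
cam = cam[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
# `cam` now highlights the image regions that support the predicted concept.
```

Overlaying the normalized heatmap on the input photo shows which regions drive the prediction; pooling the gradients per channel means the explanation reflects how strongly each feature map contributes to the class score.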
