Abstract

Images are an essential feature of many social networking services, such as Facebook, Instagram, and Twitter. Through brand-related images, consumers communicate about brands with each other and link the brand with rich contextual and consumption experiences. However, previous articles in marketing research have concentrated on deriving brand information from textual user-generated content and have largely not considered brand-related images. The analysis of brand-related images poses at least two challenges. First, the content displayed in images is heterogeneous, and second, images rarely show what users think and feel in or about the situations displayed. To meet these challenges, this article presents a two-step approach that involves collecting, labeling, clustering, aggregating, mapping, and analyzing brand-related user-generated content. The collected data are brand-related images, caption texts, and social tags posted on Instagram. Clustering images labeled via the Google Cloud Vision API enabled us to identify the heterogeneous contents (e.g., products) and contexts (e.g., situations) that consumers create content about. Aggregating and mapping the textual information for the resulting image clusters in the form of associative networks empowers marketers to derive meaningful insights by inferring what consumers think and feel about their brand regarding different contents and contexts.
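The pipeline described above can be illustrated with a minimal sketch. The abstract does not specify the clustering algorithm or the network construction, so the following uses a simple greedy Jaccard-similarity clustering over per-image label sets (standing in for labels returned by the Google Cloud Vision API) and builds associative-network edges from word co-occurrences in the captions of each cluster; all function names, thresholds, and sample data are hypothetical.

```python
from collections import Counter
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity between two label sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_images(label_sets, threshold=0.3):
    """Greedy clustering stand-in: assign each image to the first cluster
    whose representative label set is similar enough, else open a new one.
    Returns a list of (representative_labels, member_indices) pairs."""
    clusters = []
    for i, labels in enumerate(label_sets):
        for rep, members in clusters:
            if jaccard(rep, labels) >= threshold:
                members.append(i)
                rep |= labels  # grow the cluster's representative label set
                break
        else:
            clusters.append((set(labels), [i]))
    return clusters

def associative_network(captions, member_indices):
    """Aggregate caption text for one image cluster into weighted edges:
    each pair of words co-occurring in a caption strengthens an edge."""
    edges = Counter()
    for i in member_indices:
        words = sorted(set(captions[i].lower().split()))
        edges.update(combinations(words, 2))
    return edges

# Hypothetical data: label sets as a Vision-style labeler might return them.
label_sets = [
    {"coffee", "cup", "cafe"},
    {"coffee", "mug", "cafe"},
    {"car", "road"},
]
captions = ["morning coffee ritual", "cozy cafe coffee", "road trip"]

clusters = cluster_images(label_sets)          # groups the two coffee images
network = associative_network(captions, clusters[0][1])
```

Inspecting `network` then shows which caption words co-occur for a given content cluster, which is the kind of cluster-level association map the approach uses to infer what consumers think and feel about each content or context.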
