Abstract

We propose a “visual listening in” approach (i.e., mining visual content posted by users) to measure how brands are portrayed on social media. Using a deep-learning framework, we develop BrandImageNet, a multi-label convolutional neural network model, to predict the presence of perceptual brand attributes in the images that consumers post online. We validate model performance using human judges, and we find a high degree of agreement between our model and human evaluations of images. We apply the BrandImageNet model to brand-related images posted on social media and compute a brand-portrayal metric based on model predictions for 56 national brands in the apparel and beverages categories. We find a strong link between brand portrayal in consumer-created images and consumer brand perceptions collected through survey tools. Images are close to surpassing text as the medium of choice for online conversations. They convey rich information about the consumption experience, attitudes, and feelings of the user. We show that valuable insights can be efficiently extracted from consumer-created images. Firms can use the BrandImageNet model to automatically monitor their brand portrayal in real time and better understand consumer perceptions of, and attitudes toward, their own and competitors’ brands.
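The brand-portrayal metric described above aggregates per-image, multi-label model predictions into a per-brand score for each perceptual attribute. The sketch below illustrates one simple way such an aggregation could work, averaging predicted attribute probabilities (e.g., sigmoid outputs of a model like BrandImageNet) over all images posted for a brand. The function name, attribute labels, and scores are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch: aggregate per-image multi-label attribute scores
# into one brand-portrayal score per attribute. Illustrative only.

def brand_portrayal(image_scores):
    """Average each attribute's predicted probability across all
    consumer-posted images for a brand."""
    if not image_scores:
        return {}
    attrs = image_scores[0].keys()
    n = len(image_scores)
    # Mean predicted probability per attribute over the image set.
    return {a: sum(s[a] for s in image_scores) / n for a in attrs}

# Example: three consumer-posted images scored on two perceptual attributes
# (labels and values are made up for illustration).
scores = [
    {"glamorous": 0.9, "rugged": 0.1},
    {"glamorous": 0.7, "rugged": 0.2},
    {"glamorous": 0.8, "rugged": 0.3},
]
portrayal = brand_portrayal(scores)
```

In practice, the per-image scores would come from a trained multi-label classifier, and the aggregation could be more sophisticated (e.g., weighting by image engagement), but the averaging step captures the basic idea of turning image-level predictions into a brand-level metric.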
