Abstract

This research set out to identify and structure the words and expressions related to customers’ likes and dislikes found in online reviews in order to guide product development. Previous methods have mainly focused on product features, yet reviewers express preferences about much more. Here we build on an extensive review of the design science literature to propose a summarization model covering multiple aspects of user preference, such as product affordances, emotions, and usage conditions. We also uncover the linguistic patterns describing these aspects of preference and draft them into annotation guidelines. A case study demonstrates that our proposed model and annotation guidelines enable human annotators to structure online reviews with high inter-rater agreement. As such high-agreement human annotations are an essential step towards automating online review summarization with natural language processing, this study provides material for future automation research.
