Abstract

In the information age, there is a growing need to process and analyze the vast number of online reviews in order to understand consumer preferences and product reputations. Whereas existing research treats all online reviews as a single group decision-making problem, we propose a new preference learning (PL) mechanism that extracts preferences by analyzing how they vary across different time frames. First, we collect and process online ratings from e-commerce platforms. We then construct an online optimization model based on online mirror descent to learn priority vectors that reflect various consumer preferences, and we incorporate multiple learners to capture evolving preferences. In addition, we design experiments to verify the model's validity and robustness and to suggest parameter ranges. The model helps consumers and businesses capture ongoing preferences from massive volumes of online reviews. Importantly, the PL mechanism is designed to detect differing preferences and to learn different types of priorities as ratings are generated online. The model thus offers more accurate preference information and represents a broader range of consumer behavior.
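The abstract does not give the model's details, but the core update it names, online mirror descent over priority vectors, can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's method: it uses an entropic mirror map (exponentiated gradient) so that the priority vector stays on the probability simplex, and it invents a synthetic rating stream in which each observed overall rating is a weighted sum of criterion ratings under hypothetical "true" priorities.

```python
import numpy as np

def omd_update(w, grad, eta):
    """One online mirror descent step with an entropic mirror map
    (exponentiated gradient). The multiplicative update followed by
    normalization keeps w nonnegative and summing to 1."""
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()

# Hypothetical stream: each round yields criterion ratings r and an
# overall rating y = true_w @ r. The learner minimizes squared error
# (w @ r - y)^2, whose gradient in w is 2 * (w @ r - y) * r.
rng = np.random.default_rng(0)
true_w = np.array([0.6, 0.3, 0.1])  # assumed "true" priorities
w = np.ones(3) / 3                  # start from uniform priorities
for _ in range(500):
    r = rng.uniform(1.0, 5.0, size=3)
    y = true_w @ r
    grad = 2.0 * (w @ r - y) * r
    w = omd_update(w, grad, eta=0.05)
```

After the loop, `w` remains a valid priority vector and has moved toward the dominant criterion; capturing *evolving* preferences, as the abstract describes, would amount to running several such learners over different time frames of the rating stream.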
