Abstract
Detecting suggestions in online reviews requires contextual understanding of review text and is an important real-world application of natural language processing. Because product reviews span disparate text domains, a common strategy is to fine-tune bidirectional encoder representations from transformers (BERT) models on reviews from multiple domains. However, how BERT models behave across domains in the task of detecting suggestion sentences in online reviews has not been examined empirically. In this study, we evaluate BERT models for suggestion classification fine-tuned on single-domain and cross-domain Amazon review datasets. Our results indicate that while single-domain models performed slightly better than cross-domain models within their own domains, the cross-domain models outperformed single-domain models on cross-domain data, on single-domain data not used to fine-tune the single-domain model, and on average across all tests. Thus, although fine-tuning single-domain models can yield minor accuracy gains, multi-domain models that generalize across domains can mitigate cold-start problems and reduce annotation costs.
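The single-domain versus cross-domain comparison described above can be sketched as an evaluation protocol: classify each sentence within one domain, then on the pooled data from all domains. The sketch below is illustrative only; a trivial keyword heuristic and tiny hand-written datasets stand in for the fine-tuned BERT models and Amazon review corpora used in the study.

```python
# Illustrative sketch of the single-domain vs. cross-domain evaluation
# setup. The keyword heuristic is a hypothetical stand-in for a
# fine-tuned BERT suggestion classifier; the sentences are invented.

def keyword_baseline(sentence):
    """Stand-in 'model': flags sentences containing suggestion cues."""
    cues = ("should", "would be nice", "i wish", "please add", "recommend")
    return any(cue in sentence.lower() for cue in cues)

def accuracy(model, data):
    """Fraction of (sentence, label) pairs the model classifies correctly."""
    return sum(model(s) == y for s, y in data) / len(data)

# Tiny illustrative per-domain datasets: (sentence, is_suggestion).
domains = {
    "electronics": [
        ("The battery should last longer.", True),
        ("Sound quality is excellent.", False),
    ],
    "books": [
        ("I wish the ending were less abrupt.", True),
        ("A gripping read from start to finish.", False),
    ],
}

# Single-domain evaluation: score the model within each domain.
for name, data in domains.items():
    print(f"{name}: accuracy {accuracy(keyword_baseline, data):.2f}")

# Cross-domain evaluation: score the same model on the pooled data.
pooled = [pair for data in domains.values() for pair in data]
print(f"cross-domain: accuracy {accuracy(keyword_baseline, pooled):.2f}")
```

In the study itself, the stand-in model would be replaced by BERT classifiers fine-tuned either on one domain's reviews or on the pooled multi-domain data, and the per-domain and pooled accuracies would be compared.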
Indonesian Journal of Electrical Engineering and Computer Science