Abstract

Summarizing user reviews and classifying user sentiment are two critical tasks for modern e-commerce platforms. The two tasks can benefit each other by capturing shared linguistic features, yet this relationship has not been fully exploited by existing research on domain-specific contextual representations. This work explores a win-win strategy: a multi-task framework with three stages of general pre-training, adaptive pre-training, and collaborative fine-tuning. Task-adaptive continual pre-training of a language model yields domain-specific contextual representations, which are then used to improve the two related tasks, sentiment classification and review summarization, during collaborative fine-tuning. Meanwhile, to capture sentiment-oriented domain-specific contextual representations more effectively, we introduce a novel task-adaptive pre-training procedure that adds a sentiment prediction task during adaptive pre-training. Extensive experiments on two adaptation scenarios, general-to-single domain and general-to-multiple domains, show that our framework outperforms state-of-the-art methods.
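The collaborative fine-tuning stage described above can be pictured as a shared encoder feeding two task heads that are trained with a joint loss. The sketch below is a minimal illustration of that idea, not the authors' implementation: the module names, model sizes, pooling choice, and the loss weight are all illustrative assumptions.

```python
# Minimal sketch of collaborative fine-tuning with a shared encoder and two heads.
# Not the paper's code; sizes, names, and the 0.5 loss weight are assumptions.
import torch
import torch.nn as nn

class SharedEncoderMultiTask(nn.Module):
    def __init__(self, vocab_size=30000, d_model=256, n_heads=4, n_layers=2, n_sentiments=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)      # shared contextual representations
        self.sentiment_head = nn.Linear(d_model, n_sentiments)         # sentiment classification head
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)      # review-summarization decoder
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, review_ids, summary_ids):
        memory = self.encoder(self.embed(review_ids))                  # shared features for both tasks
        sentiment_logits = self.sentiment_head(memory.mean(dim=1))     # mean-pooled representation -> sentiment
        dec_out = self.decoder(self.embed(summary_ids), memory)        # teacher-forced summary decoding
        return sentiment_logits, self.lm_head(dec_out)

# Joint objective: classification loss plus weighted summarization loss (weight is an assumption).
model = SharedEncoderMultiTask()
reviews = torch.randint(0, 30000, (8, 64))
summaries = torch.randint(0, 30000, (8, 16))
labels = torch.randint(0, 2, (8,))
sent_logits, sum_logits = model(reviews, summaries[:, :-1])
loss = nn.functional.cross_entropy(sent_logits, labels) + 0.5 * nn.functional.cross_entropy(
    sum_logits.reshape(-1, 30000), summaries[:, 1:].reshape(-1))
loss.backward()
```

In the full framework, the shared encoder would first be initialized from general pre-training and then continually pre-trained on in-domain review text with the added sentiment prediction objective before this joint fine-tuning step.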
