Abstract
Sentiment analysis applies automated methods to determine an author's or speaker's attitude toward a discussed object, or the overall emotional tendency of a text. In recent years, the growing volume of opinionated text on social networks has posed significant challenges to mining sentiment tendencies manually. Pretrained language models, which learn contextual representations, outperform traditional word vectors. However, the two basic approaches for applying pretrained language models to downstream tasks, feature-based and fine-tuning methods, are usually considered separately. Moreover, a single task-specific contextual representation cannot handle different sentiment analysis tasks. To address these problems, we propose a broad multitask transformer network (BMT-Net). BMT-Net takes advantage of both feature-based and fine-tuning methods and is designed to exploit the high-level information in robust contextual representations. First, the proposed structure makes the learned representations universal across tasks via multitask transformers. In addition, BMT-Net thoroughly learns robust contextual representations through a broad learning system, exploiting its capacity to search for suitable features in both deep and broad ways. Experiments were conducted on two popular datasets: the binary Stanford Sentiment Treebank (SST-2) and SemEval Sentiment Analysis in Twitter (Twitter). Compared with other state-of-the-art methods, the representation improved in both deep and broad ways achieves a better F1-score of 0.778 on Twitter and an accuracy of 94.0% on SST-2. These results demonstrate the model's recognition ability in sentiment analysis and highlight the significance of previously overlooked design decisions about searching for contextual features in deep and broad spaces.
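To make the described architecture concrete, the following is a minimal sketch (not the authors' released code) of how a transformer encoder's contextual representation might be combined with broad-learning-style feature and enhancement nodes and per-task classification heads. The class name, layer sizes, node counts, and pooling choice are illustrative assumptions rather than values reported in the paper.

```python
import torch
import torch.nn as nn

class BroadMultitaskSketch(nn.Module):
    """Illustrative sketch: deep contextual encoder + broad feature/enhancement
    nodes + one sentiment head per task (e.g. SST-2 and Twitter)."""
    def __init__(self, vocab_size=30522, d_model=256, n_feature_nodes=128,
                 n_enhance_nodes=128, n_tasks=2, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Stand-in for a pretrained language model encoder (fine-tuning path).
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Broad-learning-style mapping: frozen feature nodes (feature-based path)
        # plus trainable enhancement nodes.
        self.feature_nodes = nn.Linear(d_model, n_feature_nodes)
        for p in self.feature_nodes.parameters():
            p.requires_grad = False  # kept fixed, acting as mapped feature nodes
        self.enhance_nodes = nn.Sequential(
            nn.Linear(n_feature_nodes, n_enhance_nodes), nn.Tanh())
        # One classification head per sentiment task.
        self.heads = nn.ModuleList(
            [nn.Linear(d_model + n_feature_nodes + n_enhance_nodes, n_classes)
             for _ in range(n_tasks)])

    def forward(self, token_ids, task_id=0):
        h = self.encoder(self.embed(token_ids))        # contextual representation
        pooled = h.mean(dim=1)                         # simple mean pooling
        feat = torch.tanh(self.feature_nodes(pooled))  # broad feature nodes
        enh = self.enhance_nodes(feat)                 # broad enhancement nodes
        broad = torch.cat([pooled, feat, enh], dim=-1)
        return self.heads[task_id](broad)

# Toy usage: a batch of 4 sequences of 16 token ids, scored by the head for task 0.
logits = BroadMultitaskSketch()(torch.randint(0, 30522, (4, 16)), task_id=0)
print(logits.shape)  # torch.Size([4, 2])
```

The concatenation of the pooled encoder output with the feature and enhancement nodes mirrors, under these assumptions, the idea of searching for suitable features in both deep and broad directions before each task-specific head.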