Abstract
Efficient and effective methods are required to build models that can rapidly extract different sentiments from large volumes of text. To improve model performance, researchers have drawn on contemporary developments in Natural Language Processing (NLP), experimenting with several model architectures and pretraining tasks. This work explores several models based on the transformer architecture and analyses their performance. The researchers use a dataset to answer the question of whether transformers perform well not only on literal language classification but also on figurative language. The results of various models that have emerged from research over time are compared. The study explains why it is necessary for computers to understand figurative language, why it remains a challenge that is still being intensively worked on, and how it differs from literal language classification. This research also examines how well these models, trained on one specific type of figurative language, generalize to other similar types.