Abstract
Choosing the more humorous of two edited headlines is a subtask within humor detection and generation. This paper addresses the second subtask of the SemEval-2020 shared task "Assessing Humor in Edited News Headlines", which asks how machines can understand the humor produced by an atomic edit to an original headline and automatically pick the funnier of two different edits. Because both substitute words for the same original headline are scored by crowdsourcing, we build not only classification models but also regression models for this task. For training, we first compare two embedding approaches, GloVe and BERT, and then combine them with different neural architectures, including a fully connected layer, a BiLSTM, and a GRU. Our BERT-based model achieves 64% accuracy, ranking second among the more than 50 teams in the competition. Furthermore, by comparing the results and performance of these models, we select representative wrongly predicted samples and analyze the potential reasons for future study. The experiments suggest that the humor of an edit comes mainly from the revised sentence, while the original sentence alone has little effect. Moreover, feeding the model both the revised and the original sentence yields the best performance, which indicates that edit humor likely arises from the edited sentence together with the difference before and after modification.
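As a concrete illustration of the pairwise setup described above, the following toy sketch shows how an instance of the task can be materialized: an original headline with the replaced span marked, two candidate substitutions with crowdsourced funniness scores, and a classification label indicating which edit scored higher. The data layout, field names, and helper functions here are our own illustrative assumptions, not the official task loader.

```python
# Toy sketch of the "pick the funnier edit" setup (SemEval-2020 Task 7,
# Subtask 2). Field names and the example instance are illustrative only.

def apply_edit(original: str, replacement: str) -> str:
    """Replace the <marked/> span in the headline with the edit word."""
    start = original.index("<")
    end = original.index("/>") + 2
    return original[:start] + replacement + original[end:]

def funnier_label(score1: float, score2: float) -> int:
    """Classification target: 1 if edit1 is funnier, 2 if edit2 is, 0 on a tie."""
    if score1 > score2:
        return 1
    if score2 > score1:
        return 2
    return 0

# One hypothetical instance: a headline, two single-word edits, and
# crowdsourced mean funniness grades (regression targets).
instance = {
    "original": "Police <arrest/> man after knife attack",
    "edit1": "hug",     "meanGrade1": 2.0,
    "edit2": "fine",    "meanGrade2": 1.2,
}

sent1 = apply_edit(instance["original"], instance["edit1"])
sent2 = apply_edit(instance["original"], instance["edit2"])
label = funnier_label(instance["meanGrade1"], instance["meanGrade2"])
```

In a model such as the ones compared in the paper, `sent1` and `sent2` (optionally concatenated with the original headline) would be embedded with GloVe or BERT and fed to a classifier head, while `meanGrade1` and `meanGrade2` would serve as regression targets.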