Abstract

Health mention classification (HMC) is the task of deciding whether an input text constitutes a health mention. Figurative and non-health uses of disease words make the task challenging, so learning the context of the input text is key: the idea is to learn a word's representation from its surrounding words and to exploit emojis in the text to improve classification. In this paper, we improve the word representations of the input text using adversarial training, which acts as a regularizer during fine-tuning of the model. We generate adversarial examples by perturbing the model's word embeddings and then train the model on pairs of clean and adversarial examples. Additionally, we employ a contrastive loss that encourages similar representations for a clean example and its perturbed version. We train and evaluate the method on three public datasets. Experiments show that contrastive adversarial training significantly improves F1-score over the baseline BERT<sub>Large</sub> and RoBERTa<sub>Large</sub> models on all three datasets. Furthermore, we provide a brief analysis of the results using explainable AI techniques.
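The core recipe in the abstract (perturb the embeddings, train on the clean/adversarial pair, and add a contrastive term pulling the two representations together) can be sketched on a toy model. This is only an illustration under assumptions not stated in the abstract: an FGSM-style sign perturbation, a cosine-distance contrastive term, and a plain logistic classifier standing in for BERT/RoBERTa, so the embedding gradient is analytic.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y):
    # Binary cross-entropy for a single example.
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def grad_wrt_embedding(x, w, y):
    # d BCE / d x for the logistic model p = sigmoid(w . x).
    p = sigmoid(w @ x)
    return (p - y) * w

def fgsm_perturb(x, w, y, eps=0.1):
    # FGSM-style adversarial example: step in the gradient's sign direction.
    # (The paper perturbs word embeddings; the exact method may differ.)
    return x + eps * np.sign(grad_wrt_embedding(x, w, y))

def contrastive(a, b):
    # 1 - cosine similarity: pulls the clean and perturbed
    # representations toward each other (hypothetical choice of loss).
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def combined_loss(x, w, y, eps=0.1, lam=0.5):
    # Train on the clean/adversarial pair plus the contrastive regularizer.
    x_adv = fgsm_perturb(x, w, y, eps)
    return (bce(sigmoid(w @ x), y)
            + bce(sigmoid(w @ x_adv), y)
            + lam * contrastive(x, x_adv))
```

In a real setup the perturbation is applied to the embedding layer of the fine-tuned transformer and `lam` trades off classification against representation consistency; the sketch only shows the shape of the objective.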
