Abstract

Word embeddings represent words as distributed features and can boost performance on sentiment analysis tasks. However, most word embeddings capture only semantic and syntactic information and ignore sentiment: words with opposite sentiment polarities (e.g., happy and sad, or good and bad) can have similar embeddings because they occur in similar contexts. To incorporate sentiment information into word vectors, several sentiment embedding approaches have been proposed. Built on end-to-end architectures, these methods typically take the sentiment label of a whole sentence as the output and use it to propagate gradients that update the context word vectors; consequently, even context words with inconsistent polarities share the same gradient during updating. To address this, we propose an adversarial learning method for training sentiment word embeddings, in which a discriminator forces a generator to produce high-quality word embeddings that exploit both semantic and sentiment information. In addition, the generator applies multi-head self-attention to re-weight the gradients so that sentiment and semantic information are captured efficiently. Comparative experiments were conducted on word- and sentence-level benchmarks, and the results demonstrate that the proposed method outperforms previous sentiment embedding training models.
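
To make the described architecture concrete, here is a minimal PyTorch sketch of the generator/discriminator interplay from the abstract: a generator embeds context words, re-weights them with multi-head self-attention, and produces a target embedding, while a discriminator scores whether an embedding looks like a "real" sentiment-aware vector. All dimensions, module names, and the stand-in "real" vectors are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn

# Hypothetical sketch of the adversarial setup; sizes are assumptions.
VOCAB_SIZE, EMB_DIM, N_HEADS = 10_000, 128, 4

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        # Multi-head self-attention re-weights context positions so that
        # gradients are not shared uniformly across context words.
        self.attn = nn.MultiheadAttention(EMB_DIM, N_HEADS, batch_first=True)

    def forward(self, context_ids):             # (batch, window)
        ctx = self.embed(context_ids)           # (batch, window, dim)
        attended, _ = self.attn(ctx, ctx, ctx)  # per-position re-weighting
        return attended.mean(dim=1)             # (batch, dim) target embedding

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # Scores whether an embedding carries consistent semantic and
        # sentiment information ("real") or is generated.
        self.score = nn.Sequential(
            nn.Linear(EMB_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, emb):
        return self.score(emb)                  # raw logit

gen, disc = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

context = torch.randint(0, VOCAB_SIZE, (32, 5))  # dummy context windows
real = torch.randn(32, EMB_DIM)                  # stand-in "real" vectors

# Discriminator step: push real vectors toward 1, generated toward 0.
fake = gen(context).detach()
loss_d = bce(disc(real), torch.ones(32, 1)) + \
         bce(disc(fake), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator into scoring its output as real.
loss_g = bce(disc(gen(context)), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

In a faithful reproduction, the "real" vectors and losses would follow the paper's training objective; this loop only illustrates the standard adversarial update pattern the abstract refers to.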
