Abstract

Global vectors, or global embeddings, are important word representations for many natural language processing tasks. With the rise of dynamic embeddings (also known as contextual embeddings, such as ELMo and BERT) in recent years, attention has largely shifted away from global vectors. Compared to dynamic embeddings, however, global embeddings are faster to train, more straightforward to interpret, and can be evaluated with many standard, credible intrinsic benchmarks (e.g., word similarity correlation and analogy accuracy), so they remain widely used in numerous downstream applications. However, the model design of global embeddings has limitations that leave the learned word representations suboptimal. In this paper, we propose a novel method that addresses these limitations using PID control. To the best of our knowledge, this is one of the first efforts to leverage PID control in word embedding research. Empirical results on standard intrinsic and extrinsic benchmarks show a consistent performance boost from the proposed method, suggesting that it is a promising alternative for learning better word representations for downstream tasks.
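The abstract does not specify how PID control enters the embedding objective, so the sketch below shows only the standard, textbook discrete PID update for orientation; the `PIDController` class and the loss-tracking usage at the end are illustrative assumptions, not the paper's method.

```python
import numpy as np


class PIDController:
    """Generic discrete PID controller (illustrative only).

    The paper's specific formulation is not given in the abstract;
    this is the standard textbook update, not the authors' method.
    """

    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0       # accumulated past error
        self.prev_error = 0.0     # error at the previous step

    def step(self, error: float, dt: float = 1.0) -> float:
        # Proportional term reacts to the current error, the integral
        # term to accumulated past error, and the derivative term to
        # the error's trend.
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Hypothetical usage: treat the gap between the observed training loss and
# a target loss (0 here, an assumption) as the error signal, and let the
# controller adjust a training knob such as the learning rate.
pid = PIDController(kp=0.5, ki=0.05, kd=0.1)
lr = 0.05
for observed_loss in [1.2, 0.9, 0.7, 0.6]:
    error = observed_loss - 0.0
    lr = max(1e-4, lr - 1e-3 * pid.step(-error))
```

In a control-theoretic reading of training, such a controller could regulate step sizes or per-term weights in the objective based on how the loss evolves; which quantity the paper actually controls is left to the full text.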
