Abstract
Supervised and unsupervised learning are the mainstream approaches to semantic textual similarity tasks. However, supervised learning requires substantial labeled data, which is hard to obtain in practice. Given the scarcity of annotated data and the success of unsupervised word embeddings across many tasks, we therefore turn to constructing sentence embeddings from unlabelled data. We present a simple but efficient unsupervised method for learning sentence embeddings, inspired by the attention mechanism: weighted contexts are added to word2vec-style models to train distributed sentence representations. Our method outperforms state-of-the-art unsupervised models on semantic textual similarity tasks.
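To make the idea of attention-weighted contexts concrete, here is a minimal sketch (not the authors' exact method; the function name, the choice of the mean vector as context, and the softmax weighting are all illustrative assumptions) of forming a sentence embedding as an attention-weighted average of pretrained word vectors:

```python
import numpy as np

# Hypothetical illustration: each word's attention weight comes from a
# softmax over its similarity to a simple sentence context (the mean of
# the word vectors). The sentence embedding is the weighted sum.

def attention_sentence_embedding(word_vectors: np.ndarray) -> np.ndarray:
    """word_vectors: (num_words, dim) array of pretrained word embeddings."""
    context = word_vectors.mean(axis=0)       # crude sentence-level context
    scores = word_vectors @ context           # similarity of each word to context
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ word_vectors             # attention-weighted sentence vector

rng = np.random.default_rng(0)
vecs = rng.normal(size=(5, 8))                # 5 words, 8-dimensional embeddings
emb = attention_sentence_embedding(vecs)
print(emb.shape)
```

In a word2vec-style setup, such a weighted context would replace the uniform context average when predicting surrounding words, so that informative words contribute more to the learned sentence representation.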