Abstract

Sentence pair modelling is the task of identifying the semantic relation between two sentences, e.g., paraphrase identification, textual entailment recognition, and semantic similarity measurement. These constitute crucial tasks for research in natural language understanding. Sentence representation learning is a fundamental technology for sentence pair modelling, in which the development of the BERT model marked a breakthrough. We recently proposed transfer fine-tuning using phrasal paraphrases, which adapts BERT's representations to semantic equivalence assessment between sentences while maintaining the model size. Herein, we show that transfer fine-tuning with simplified feature generation yields representations that are broadly effective across different types of sentence pair modelling tasks. Detailed analysis confirms that our transfer fine-tuning helps the BERT model converge more quickly with a smaller fine-tuning corpus.
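The transfer fine-tuning step described above can be pictured as training a lightweight paraphrase classifier on pair features derived from sentence embeddings. The sketch below is illustrative only: the feature scheme (concatenation plus element-wise absolute difference) is a common convention assumed here, not necessarily the paper's exact "simplified feature generation", and random toy vectors stand in for real BERT outputs.

```python
import numpy as np

def pair_features(u, v):
    """Combine two sentence embeddings into pair features.

    Assumed scheme for illustration: [u; v; |u - v|].
    """
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return np.concatenate([u, v, np.abs(u - v)])

# Toy stand-ins for BERT sentence embeddings of dimension d.
rng = np.random.default_rng(0)
d = 4
u = rng.normal(size=d)
v = rng.normal(size=d)

feats = pair_features(u, v)          # length 3 * d

# Hypothetical classifier head: paraphrase vs. non-paraphrase.
W = rng.normal(size=(2, feats.size))
logits = W @ feats
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over 2 classes
```

In the actual method, the encoder producing `u` and `v` would be BERT, and its weights would be updated jointly with the classifier head on a phrasal-paraphrase corpus before fine-tuning on the downstream sentence pair task.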
