ABSTRACT

Research on semantic similarity between relatively short texts, for example at the word and sentence levels, has progressed significantly in recent years. However, paragraph‐level similarity has not been researched in as much detail, owing to the challenges associated with embedding representations, despite its utility in numerous applications. A rudimentary approach to paragraph‐level similarity treats each paragraph as an elongated sentence, encoding the entire paragraph into a single vector. However, this loses long‐distance dependency information and ignores interactions between sentences belonging to different paragraphs. In this paper, we propose a simple yet efficient method for estimating paragraph similarity. Given two paragraphs, it first obtains a vector for each sentence by leveraging advanced sentence‐embedding techniques. Next, the similarity between each sentence in the first paragraph and the second paragraph is estimated as the maximum cosine similarity between that sentence and every sentence in the second paragraph. This process is repeated for all sentences in the first paragraph to determine each sentence's maximum similarity to the second paragraph. Finally, the overall paragraph similarity is computed by averaging these maximum cosine similarity values. This method alleviates the long‐range dependency problem by embedding sentences individually, and it accounts for sentence‐level interactions between the two paragraphs. Experiments conducted on two benchmark data sets demonstrate that the proposed method outperforms the baseline approach of encoding entire paragraphs into single vectors.
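A minimal sketch of the scoring procedure described above, assuming sentence embeddings have already been produced by some sentence encoder; the function name and the random stand-in vectors are illustrative, not from the paper:

```python
import numpy as np

def paragraph_similarity(sent_vecs_a, sent_vecs_b):
    """Averaged max-cosine paragraph similarity, per the abstract's description.

    sent_vecs_a, sent_vecs_b: 2-D arrays of shape (num_sentences, dim),
    one row per sentence embedding from any sentence encoder.
    """
    a = np.asarray(sent_vecs_a, dtype=float)
    b = np.asarray(sent_vecs_b, dtype=float)
    # L2-normalize rows so that dot products equal cosine similarities.
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    # cos[i, j] = cosine similarity between sentence i of A and sentence j of B.
    cos = a @ b.T
    # For each sentence in the first paragraph, take its best match in the
    # second paragraph, then average these maxima over the first paragraph.
    return cos.max(axis=1).mean()

# Toy usage with random stand-in embeddings; a real setup would obtain the
# vectors from a sentence encoder such as Sentence-BERT.
rng = np.random.default_rng(0)
para_a = rng.normal(size=(3, 384))  # 3 sentences, 384-dim vectors
para_b = rng.normal(size=(5, 384))  # 5 sentences
print(paragraph_similarity(para_a, para_b))
```

Note that, as described, the score is directed: the maxima are averaged over the first paragraph's sentences, so `paragraph_similarity(a, b)` need not equal `paragraph_similarity(b, a)`.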