Abstract

Text classification occupies a fundamental and central position in natural language processing. Many solutions to the text classification problem exist, but few combine semantics from multiple perspectives to improve classification performance. This paper proposes a dual-channel attention network model, DCAT, which exploits the complementarity between semantic views to remedy this deficit in understanding. Specifically, DCAT first captures the logical semantics of the text through transductive learning over a graph structure. Then, in the attention fusion layer (Channel), the logical semantics are used to jointly train the other semantic views, incrementally correcting the predictions on unlabeled test data. Experiments show that DCAT achieves more accurate classification on a wide range of text classification datasets, which is vital for downstream text mining tasks.
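The abstract names DCAT's components but not their exact design, so the following is only a minimal, hypothetical sketch of the described fusion step: one channel's graph-derived "logical" semantics steer attention weights over both channels before classification. It assumes PyTorch, equal-dimensional channel outputs, and invented names (DualChannelAttentionFusion, h_graph, h_seq) that do not appear in the paper.

```python
import torch
import torch.nn as nn

class DualChannelAttentionFusion(nn.Module):
    """Hypothetical sketch of an attention fusion layer in the spirit of DCAT.

    Assumes two per-document feature vectors of equal dimension:
    - h_graph: "logical" semantics from a graph/transductive channel
    - h_seq:   complementary semantics from a second channel (e.g., a
               sequence encoder); the abstract does not fix this choice.
    """

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)  # graph channel provides the query
        self.key = nn.Linear(dim, dim)    # per-channel keys
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, h_graph: torch.Tensor, h_seq: torch.Tensor) -> torch.Tensor:
        # Stack the two channels: (batch, 2, dim)
        channels = torch.stack([h_graph, h_seq], dim=1)
        # Logical semantics steer the fusion weights over both channels.
        q = self.query(h_graph).unsqueeze(1)                  # (batch, 1, dim)
        k = self.key(channels)                                # (batch, 2, dim)
        scores = (q * k).sum(-1) / channels.size(-1) ** 0.5   # (batch, 2)
        weights = scores.softmax(dim=-1).unsqueeze(-1)        # (batch, 2, 1)
        fused = (weights * channels).sum(dim=1)               # (batch, dim)
        return self.classifier(fused)

# Usage: fuse 128-dim channel outputs for a 4-class task.
model = DualChannelAttentionFusion(dim=128, num_classes=4)
logits = model(torch.randn(8, 128), torch.randn(8, 128))
print(logits.shape)  # torch.Size([8, 4])
```

The key design point mirrored here is asymmetry: the graph channel supplies the attention query, so the logical semantics govern how much each view contributes, rather than the two channels being averaged symmetrically.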
