Abstract

Text classification is a fundamental task in natural language processing. Most existing text classification models focus on constructing sophisticated high-level text features but ignore the importance of word features, using only low-level word features obtained from a linear layer as input. To explore how the quality of word representations affects text classification, we propose a deep architecture that extracts high-level word features for text classification. Specifically, we use temporal convolution filters of varying sizes to capture different contextual features. A transition layer then coalesces the contextual features into enriched high-level word representations. We also find that word feature reuse is useful in our architecture for enriching word representations. Extensive experiments on six publicly available datasets show that enriched word representations can significantly improve the performance of classification models.
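The abstract's pipeline — multi-width temporal convolutions, word-feature reuse, and a transition layer — can be illustrated with a minimal NumPy sketch. The kernel sizes, hidden width, ReLU activation, and concatenation-based feature reuse below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def conv1d_same(x, w):
    """Temporal convolution with 'same' padding.
    x: (seq_len, in_dim); w: (k, in_dim, out_dim) -> (seq_len, out_dim)."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, k - 1 - pad), (0, 0)))
    return np.stack([
        np.einsum('kd,kdo->o', xp[t:t + k], w) for t in range(x.shape[0])
    ])

def enrich_words(emb, kernel_sizes=(1, 3, 5), hidden=8, seed=0):
    """Multi-width convolutions -> concatenation -> transition layer.
    Word-feature reuse is modeled by concatenating the original
    low-level embeddings alongside the contextual features."""
    rng = np.random.default_rng(seed)
    seq_len, dim = emb.shape
    # One temporal convolution per filter width (random weights for the sketch)
    feats = [conv1d_same(emb, rng.standard_normal((k, dim, hidden)) * 0.1)
             for k in kernel_sizes]
    # Feature reuse: keep the low-level word features in the mix
    stacked = np.concatenate(feats + [emb], axis=1)
    # Transition layer coalesces everything back to the embedding dimension
    w_t = rng.standard_normal((stacked.shape[1], dim)) * 0.1
    return np.maximum(stacked @ w_t, 0.0)  # ReLU (assumed activation)

emb = np.random.default_rng(1).standard_normal((7, 16))  # 7 words, dim 16
out = enrich_words(emb)
print(out.shape)  # (7, 16): one enriched representation per word
```

The enriched per-word representations would then feed a downstream classifier (e.g. pooling plus a softmax layer), which this sketch omits.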
