Abstract

As technology accelerates the generation and communication of textual data, automatically understanding this content becomes a necessity. To classify text, be it for tagging, indexing, or curating documents, one often relies on large, opaque models trained on pre-annotated datasets, making the process unexplainable, difficult to scale, and ill-suited for niche domains with scarce data. To tackle these challenges, we propose ProZe, a text classification approach that leverages knowledge from two sources: prompting pretrained language models, and querying ConceptNet, a commonsense knowledge base that can be used to add a layer of explainability to the results. We evaluate our approach empirically and show that this combination not only performs on par with state-of-the-art zero-shot classification across several domains, but also offers explainable predictions that can be visualized.
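The abstract does not detail the method, but the following minimal sketch illustrates the general idea of combining prompt-based zero-shot classification with ConceptNet lookups. The model choice (facebook/bart-large-mnli), the NLI-style zero-shot pipeline, and the score-averaging rule are illustrative assumptions, not the authors' exact ProZe pipeline.

```python
import requests
from transformers import pipeline

# NLI-based zero-shot classifier (an assumed stand-in for the paper's
# prompted pretrained language model).
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def conceptnet_neighbours(label, limit=5):
    """Fetch English terms linked to `label` via ConceptNet's public API."""
    url = f"http://api.conceptnet.io/c/en/{label.replace(' ', '_')}"
    edges = requests.get(url, params={"limit": limit}).json().get("edges", [])
    terms = []
    for edge in edges:
        end = edge.get("end", {})
        if end.get("language") == "en" and end.get("label", "").lower() != label.lower():
            terms.append(end["label"])
    return terms

def classify(text, labels):
    """Score each label together with its ConceptNet neighbours; the
    neighbours double as a human-readable explanation of the prediction."""
    scores = {}
    for label in labels:
        candidates = [label] + conceptnet_neighbours(label)
        result = classifier(text, candidate_labels=candidates)
        # Assumed combination rule: average over the label and its neighbours.
        scores[label] = sum(result["scores"]) / len(result["scores"])
    return max(scores, key=scores.get), scores

best, all_scores = classify("The match ended 2-1 after extra time.", ["sports", "politics"])
print(best, all_scores)
```

In this sketch, the ConceptNet neighbours retrieved for the winning label (e.g. terms related to "sports") serve as the explainable evidence the abstract alludes to, since they can be shown to the user alongside the prediction.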
