Abstract

Multi-class learning (MCL) methods perform Automatic Text Classification (ATC), which requires labeled instances for all classes. MCL fails when there is no well-defined class information and demands considerable labeling effort. One-Class Learning (OCL) can mitigate these limitations since training uses instances from only one class, reducing the labeling effort and making ATC more suitable for open-domain applications. However, OCL is more challenging due to the lack of counterexamples. Even so, most studies use unimodal representations, even though different domains contain other types of information (modalities). Thus, this study proposes the Multimodal Variational Autoencoder (MVAE) for OCL. MVAE is a multimodal method that learns a new representation from more than one modality, adequately capturing the characteristics of the class of interest. MVAE explores semantic, density, linguistic, and spatial information modalities. The main contribution is a new multimodal method for representation learning in OCL scenarios that achieves state-of-the-art results in three domains while requiring few training instances.
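To make the core idea concrete, the following is a minimal sketch of the multimodal-VAE pattern the abstract describes: feature vectors from two modalities are fused, encoded into the parameters of a latent Gaussian, sampled via the reparameterization trick, and decoded back. All dimensions, weights, and modality names here are illustrative assumptions, not the paper's actual architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Linear encoder producing the parameters of q(z|x).
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # z = mu + sigma * eps: the standard VAE reparameterization trick.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, W_dec):
    # Linear decoder reconstructing the fused input from z.
    return z @ W_dec

# Hypothetical feature vectors for one document from two modalities
# (dimensions chosen for illustration only).
semantic = rng.standard_normal((1, 32))   # e.g. a semantic embedding
density = rng.standard_normal((1, 8))     # e.g. density-based features
x = np.concatenate([semantic, density], axis=1)  # fused input, dim 40

latent_dim = 10
W_mu = rng.standard_normal((40, latent_dim)) * 0.1
W_logvar = rng.standard_normal((40, latent_dim)) * 0.1
W_dec = rng.standard_normal((latent_dim, 40)) * 0.1

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
x_hat = decode(z, W_dec)

# VAE training would minimize reconstruction error plus the KL
# divergence of q(z|x) from the standard normal prior.
kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
recon = np.mean((x - x_hat) ** 2)
```

In an OCL setting, a model like this would be trained only on instances of the class of interest; the learned latent representation (or the reconstruction error) then serves to characterize that class without counterexamples.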
