Abstract

Making comprehensive use of incomplete multi-modality data is a difficult problem with strong practical value. Most previous multimodal learning algorithms require massive training data with complete modalities and annotated labels, which greatly limits their practicality. Although some existing algorithms can perform data imputation, they still have two disadvantages: (1) they cannot accurately control the semantics of the imputed modalities; and (2) they need to build multiple independent converters between every pair of modalities when extended to multimodal cases. To overcome these limitations, we propose a novel doubly semi-supervised multimodal learning (DSML) framework. Specifically, DSML uses a modality-shared latent space and multiple modality-specific generators to associate multiple modalities. The shared latent space is divided into two independent parts, the semantic labels and the semantic-free styles, which makes it easy to control the semantics of generated samples. In addition, each modality has its own encoder and classifier to infer the corresponding semantic and semantic-free latent variables. The DSML framework is trained adversarially with specially designed softmax-based discriminators. Extensive experimental results show that DSML outperforms the baselines on three tasks: semi-supervised classification, missing-modality imputation, and cross-modality retrieval.
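
To make the described architecture concrete, below is a minimal sketch of the components named in the abstract (per-modality encoders, classifiers, and generators over a shared label/style latent space), assuming a PyTorch implementation with two modalities. All module names, layer sizes, and feature dimensions are illustrative assumptions, not the paper's actual networks or training losses; the sketch only shows how a shared latent space avoids pairwise converters, since any modality can be imputed from another through the (label, style) representation.

```python
# Minimal sketch of the DSML architecture described in the abstract.
# All layer sizes, module names, and the two-modality setup are illustrative
# assumptions; the paper's actual networks and losses may differ.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Modality-specific encoder: infers the semantic-free style vector."""
    def __init__(self, input_dim, style_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, style_dim),
        )

    def forward(self, x):
        return self.net(x)


class Classifier(nn.Module):
    """Modality-specific classifier: infers the semantic label (class logits)."""
    def __init__(self, input_dim, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.net(x)


class Generator(nn.Module):
    """Modality-specific generator: maps (label, style) back to a sample,
    so any modality can be imputed from the shared latent space."""
    def __init__(self, num_classes, style_dim, output_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes + style_dim, 256), nn.ReLU(),
            nn.Linear(256, output_dim),
        )

    def forward(self, label_probs, style):
        return self.net(torch.cat([label_probs, style], dim=-1))


# Illustrative two-modality instantiation (e.g., image and text features).
num_classes, style_dim = 10, 32
dims = {"modality_a": 512, "modality_b": 300}   # assumed feature dimensions
encoders = {m: Encoder(d, style_dim) for m, d in dims.items()}
classifiers = {m: Classifier(d, num_classes) for m, d in dims.items()}
generators = {m: Generator(num_classes, style_dim, d) for m, d in dims.items()}

# Imputing a missing modality B from an observed modality A:
x_a = torch.randn(4, dims["modality_a"])
label = torch.softmax(classifiers["modality_a"](x_a), dim=-1)  # semantic part
style = encoders["modality_a"](x_a)                            # semantic-free part
x_b_imputed = generators["modality_b"](label, style)
```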
