Abstract

Efficient processing of large-scale multimodal sensor data is a key challenge in Internet of Things (IoT) applications. Accurate cloud classification is critical for weather and climate monitoring, which are typical IoT applications. In this paper, we propose a novel generative deep model, the multimodal generative adversarial network (Multimodal GAN), to improve both energy efficiency and cloud classification accuracy in the IoT. The proposed Multimodal GAN consists of a discriminator and a generator, each designed as a two-stream network whose branches correspond to cloud visual information and cloud scalar information, respectively. The Multimodal GAN can therefore generate cloud visual and scalar information simultaneously. The training set is then augmented with the generated multimodal cloud samples, and a deep multimodal cloud classification model is trained on the augmented set. As a result, the classification model generalizes well and is less prone to overfitting. Moreover, the feature representations extracted from the classification model capture the salient information in the raw multimodal cloud data and can therefore be stored and transmitted in the IoT. The effectiveness of the proposed method in both energy efficiency and cloud classification accuracy is validated on a multimodal cloud dataset.
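To make the two-stream design concrete, the sketch below outlines one way the generator and discriminator branches could be arranged. The layer sizes, the 64x64 image resolution, the 4-dimensional scalar vector, and the use of PyTorch are illustrative assumptions for this sketch, not the paper's actual configuration.

# Minimal sketch of a two-stream Multimodal GAN, assuming 64x64 RGB cloud
# images and 4 weather scalars per sample (both dimensions are assumptions).
import torch
import torch.nn as nn

LATENT_DIM = 100   # assumed noise dimension
IMG_CHANNELS = 3   # assumed RGB ground-based cloud images
IMG_SIZE = 64      # assumed image resolution
SCALAR_DIM = 4     # assumed number of weather scalars per sample


class TwoStreamGenerator(nn.Module):
    """Maps a shared latent code to a cloud image and a scalar vector."""

    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(inplace=True))
        # Visual branch: upsample the shared code to an image.
        self.visual = nn.Sequential(
            nn.Linear(256, 128 * 8 * 8),
            nn.ReLU(inplace=True),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),           # 16x16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),            # 32x32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, IMG_CHANNELS, 4, stride=2, padding=1),  # 64x64
            nn.Tanh(),
        )
        # Scalar branch: a small MLP producing the weather measurements.
        self.scalar = nn.Sequential(
            nn.Linear(256, 64), nn.ReLU(inplace=True), nn.Linear(64, SCALAR_DIM)
        )

    def forward(self, z):
        h = self.shared(z)
        return self.visual(h), self.scalar(h)


class TwoStreamDiscriminator(nn.Module):
    """Scores a (cloud image, scalar vector) pair as real or generated."""

    def __init__(self):
        super().__init__()
        self.visual = nn.Sequential(
            nn.Conv2d(IMG_CHANNELS, 32, 4, stride=2, padding=1),  # 32x32
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),            # 16x16
            nn.LeakyReLU(0.2, inplace=True),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128),
        )
        self.scalar = nn.Sequential(
            nn.Linear(SCALAR_DIM, 64), nn.LeakyReLU(0.2, inplace=True), nn.Linear(64, 128)
        )
        # Fuse the two streams and predict a single real/fake logit.
        self.head = nn.Linear(256, 1)

    def forward(self, img, scalars):
        fused = torch.cat([self.visual(img), self.scalar(scalars)], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    gen, disc = TwoStreamGenerator(), TwoStreamDiscriminator()
    z = torch.randn(8, LATENT_DIM)
    fake_img, fake_scalars = gen(z)        # both modalities from one latent code
    logits = disc(fake_img, fake_scalars)  # joint real/fake score over the pair
    print(fake_img.shape, fake_scalars.shape, logits.shape)

Generated (image, scalar) pairs from such a generator would be appended to the real training set before the multimodal classifier is trained, which is the augmentation step the abstract describes.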
