Abstract

Affective social multimedia computing offers opportunities to improve our daily lives. Various systems, such as devices in ubiquitous computing environments and autonomous vehicles operating in real environments alongside human beings, can be controlled by analyzing and learning from affective social big data. Deep learning is a core learning algorithm for autonomous control; however, it requires huge amounts of learning data, and collecting diverse learning data is expensive. This limitation on collecting affective social videos for deep learning can be addressed by analyzing affective social videos gathered in advance, such as YouTube and closed-circuit television (CCTV) videos, and autonomously controlling additional cameras to generate new affective social videos as learning data without human intervention. The camera control signals are generated by Convolutional Neural Network (CNN)-based end-to-end control. However, consecutively captured images must be analyzed together to improve the quality of the generated control signals. This paper proposes a system that generates affective social videos for deep learning through Convolutional Recurrent Neural Network (CRNN)-based end-to-end control. Images extracted from affective social videos are used to compute the camera control signals with the CRNN, and additional affective social videos are then generated from the extracted consecutive images and the camera control signals. The effectiveness of the proposed method was verified experimentally by comparing its results with those of a traditional CNN. The accuracy of the control signals generated by the proposed method was 56.30% higher than that of the control signals generated by the traditional CNN.
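
As a rough illustration of the CRNN-based end-to-end control described above, the following Python (PyTorch) sketch maps a short sequence of consecutive frames to camera control signals. It is not the authors' implementation; the layer sizes, the three-dimensional control output, and the input resolution are illustrative assumptions.

    # Minimal sketch (illustrative assumptions, not the authors' model):
    # a CRNN mapping consecutive frames to camera control signals.
    import torch
    import torch.nn as nn

    class CRNNController(nn.Module):
        def __init__(self, hidden_size=128, num_controls=3):
            super().__init__()
            # Per-frame CNN feature extractor
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
            # Recurrent layer aggregates features across consecutive frames
            self.rnn = nn.LSTM(input_size=32 * 4 * 4,
                               hidden_size=hidden_size, batch_first=True)
            # Regress the control signals from the last hidden state
            self.head = nn.Linear(hidden_size, num_controls)

        def forward(self, frames):                   # frames: (B, T, 3, H, W)
            b, t = frames.shape[:2]
            feats = self.cnn(frames.flatten(0, 1))   # (B*T, 32, 4, 4)
            feats = feats.flatten(1).view(b, t, -1)  # (B, T, 512)
            out, _ = self.rnn(feats)
            return self.head(out[:, -1])             # (B, num_controls)

    # Example: 8 consecutive 112x112 frames -> 3 control values per sequence
    controls = CRNNController()(torch.randn(2, 8, 3, 112, 112))
    print(controls.shape)  # torch.Size([2, 3])

The recurrent layer is what distinguishes this from a purely CNN-based controller: each control signal is conditioned on the sequence of preceding frames rather than on a single image.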
