Abstract
Cross-media visual communication is a highly complex task. To segment the visual foreground from the background, improve the accuracy of visual-communication scene reconstruction, and support real-time visual communication, we propose an improved generative adversarial network. Building on the standard generative adversarial network, we add a combined encoder-decoder (codec) block to the generator and arrange the generator and discriminator in a cascade structure, preserving the upsampling and downsampling convolutional feature layers of visual scenes at different depths through layer-wise correspondence. To classify features from different visual-scene layers, we add an auxiliary classifier based on convolutional neural networks; with its help, similar visual scenes with different feature layers are recognized more accurately. In the experiments, to better distinguish foreground from background in visual communication, we evaluate foreground and background performance on separate datasets. The results show that our method achieves good accuracy on both foreground and background in real-time cross-media visual communication. In addition, we validate the efficiency of our method on the Cityscapes, NoW, and Replica datasets, and show experimentally that it outperforms both traditional machine-learning methods and deep-learning methods of the same type.
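The abstract describes a generator with a combined encoder-decoder block, correspondences between downsampling and upsampling feature layers, and a CNN-based auxiliary classifier for scene-layer features. The sketch below illustrates one way such an architecture could look in PyTorch; the module names, channel widths, two-level depth, and class count are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the described architecture, assuming a PyTorch-style
# implementation. All sizes and names here are illustrative assumptions.
import torch
import torch.nn as nn

class EncoderDecoderGenerator(nn.Module):
    """Generator with a combined encoder-decoder (codec) block.

    Skip connections preserve the down-/upsampled feature maps of each
    visual-scene layer, mirroring the correspondence the abstract describes.
    """
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        # Downsampling (encoder) convolutional layers.
        self.down1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2))
        self.down2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2))
        # Upsampling (decoder) layers; the skip connection doubles up2's input channels.
        self.up1 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.ReLU())
        self.up2 = nn.Sequential(nn.ConvTranspose2d(base * 2, in_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        d1 = self.down1(x)   # scene features at layer 1
        d2 = self.down2(d1)  # scene features at layer 2
        u1 = self.up1(d2)
        # Correspondence between down- and upsampled features via concatenation.
        return self.up2(torch.cat([u1, d1], dim=1))

class AuxiliaryClassifier(nn.Module):
    """CNN head that classifies features drawn from different scene layers."""
    def __init__(self, in_ch=64, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 128, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, n_classes),
        )

    def forward(self, feats):
        return self.net(feats)  # logits over scene-layer classes

if __name__ == "__main__":
    x = torch.randn(1, 3, 64, 64)
    g = EncoderDecoderGenerator()
    fake = g(x)                        # reconstructed scene, same shape as x
    clf = AuxiliaryClassifier(in_ch=64)
    logits = clf(g.down1(x))           # classify layer-1 scene features
    print(fake.shape, logits.shape)
```

In a full cascade, the discriminator would consume the generator's output (and, per the abstract, the auxiliary classifier's layer predictions would supply an additional training signal); those components are omitted here for brevity.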