Abstract

With the progress of artificial intelligence, AI-based generation of images and video has become a hot topic, and the technology is being applied to real-time processing on monitors, cameras, and smartphones. Using GAN embedder networks, several studies have attempted to transfer the style, action, and content of one video to a target video. Unfortunately, this kind of generation is often difficult to control. We propose an Engine-GAN (E-GAN) model and pipeline, which combines an engine-based method with embedder-GAN content to create "real" videos that can be edited in real time, so that AI-generated images and content can be directly controlled. We report progress on the E-GAN architecture, the E-GAN workflow, tag generation, and entity stylization, and we use the cuckoo search algorithm to optimize the transfer target and improve transfer efficiency.
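The abstract names cuckoo search as the optimizer for the transfer target but gives no implementation details. As background, the following is a minimal, generic sketch of standard cuckoo search (Lévy-flight moves plus abandonment of the worst nests), here minimizing a toy sphere function; all names, parameters, and the objective are illustrative assumptions, not the authors' code.

```python
import math
import random

def levy_step(beta=1.5):
    # Mantegna's algorithm: draw a Levy-distributed step length
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(cost, dim=2, n_nests=15, pa=0.25, iters=200, lo=-5.0, hi=5.0):
    # Initialize nests (candidate solutions) uniformly in the search box
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fitness = [cost(x) for x in nests]
    for _ in range(iters):
        # Propose a new solution by a Levy flight from a random nest
        i = random.randrange(n_nests)
        new = [min(hi, max(lo, x + 0.01 * levy_step())) for x in nests[i]]
        # Replace a randomly chosen nest if the proposal is better
        j = random.randrange(n_nests)
        new_cost = cost(new)
        if new_cost < fitness[j]:
            nests[j], fitness[j] = new, new_cost
        # Abandon a fraction pa of the worst nests and re-seed them
        order = sorted(range(n_nests), key=lambda k: fitness[k], reverse=True)
        for k in order[:int(pa * n_nests)]:
            nests[k] = [random.uniform(lo, hi) for _ in range(dim)]
            fitness[k] = cost(nests[k])
    best = min(range(n_nests), key=lambda k: fitness[k])
    return nests[best], fitness[best]

random.seed(0)
best_x, best_f = cuckoo_search(lambda x: sum(v * v for v in x))
```

In the paper's setting, the cost function would presumably score a candidate transfer target rather than a toy objective; the search loop itself is unchanged.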
