Abstract

Cross-modal oil painting images generated by traditional methods tend to miss important information in the target region, and the generated images lack realism. This paper combines the feature extraction techniques of multimedia data with the generative adversarial network (GAN) from deep learning, proposes a generation model for classic oil paintings, and applies it to university teaching. First, a key-frame extraction algorithm extracts the key frames from the video, and a channel attention network is introduced into a pre-trained ResNet-50 network to extract static features of 2D images in short oil painting videos. Then, deep feature mapping is performed in the time dimension with a two-stream I3D network, and the feature representation is enhanced by combining static and dynamic features. Finally, the GAN maps the high-dimensional features of the deep feature space into two-dimensional space to generate classic oil painting images.
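The channel-attention step applied to the ResNet-50 feature maps can be sketched in the common squeeze-and-excitation style; the weight matrices `w1`, `w2` and the reduction ratio are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Reweight a (C, H, W) feature map by per-channel attention scores.

    Squeeze-and-excitation style sketch: global average pooling per channel,
    a small two-layer gating network (ReLU then sigmoid), then channel-wise
    rescaling of the input features.
    """
    # Squeeze: global average pool each channel -> vector of length C
    z = features.mean(axis=(1, 2))
    # Excitation: bottleneck FC layer with ReLU, expand back, sigmoid gate
    s = np.maximum(w1 @ z, 0.0)          # (C/r,) hidden, r = reduction ratio
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # (C,) gates in (0, 1)
    # Scale: multiply each channel of the feature map by its gate
    return features * s[:, None, None]

# Hypothetical shapes: 4 channels, 3x3 spatial map, reduction ratio 2
rng = np.random.default_rng(0)
feat = np.abs(rng.standard_normal((4, 3, 3)))
w1 = rng.standard_normal((2, 4))
w2 = rng.standard_normal((4, 2))
out = channel_attention(feat, w1, w2)
```

Because the sigmoid gates lie in (0, 1), each channel is attenuated in proportion to its learned importance, which is how the attention module emphasizes informative channels of the static features.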
