Abstract

Encoder-decoder style transfer networks introduce errors during content feature transfer and cross-domain feature fusion. To address this problem, a content-consistency-preserving style transfer network is proposed. Source and target images are fed into the encoding module to extract content and style features, respectively. A dual-chain feature transfer module constructs feature mappings at the same depth as the content encoder and decoder, and processes content features in parallel through a feature-enhancement chain and a feed-forward reference chain. Content and style features are fused in the decoder, which outputs the style transfer result. In simulation tests based on multimodal unsupervised image-to-image translation (MUNIT), the proposed network significantly improves style transfer for blurred objects and for scenes with low contrast between objects and background in low-light environments. Experimental results on the BDD100K dataset show that, compared with MUNIT, FID decreases by 3.2% and IS increases by 8.6% on average; the proposed network achieves content-consistent and style-accurate image translation and is applicable to autonomous vehicle systems.
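The abstract's dataflow (encode content and style separately, pass content through parallel enhancement and feed-forward reference chains, then fuse with style in the decoder) can be sketched minimally in NumPy. All dimensions, weight matrices, and the AdaIN-style fusion step below are illustrative assumptions; the paper's actual layer definitions are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    # Toy "encoder": a linear projection standing in for a conv stack.
    return np.tanh(x @ w)

# Hypothetical sizes: 64-dim input features, 32-dim latent features.
d_in, d_lat = 64, 32
w_content = rng.standard_normal((d_in, d_lat)) * 0.1
w_style = rng.standard_normal((d_in, d_lat)) * 0.1

source = rng.standard_normal((1, d_in))  # source (content) image features
target = rng.standard_normal((1, d_in))  # target (style) image features

c = encode(source, w_content)  # content features
s = encode(target, w_style)    # style features

# Dual-chain transfer: a feature-enhancement chain transforms the content
# features, while a feed-forward reference chain passes them through
# unchanged; summing the two preserves the original content information.
w_enh = rng.standard_normal((d_lat, d_lat)) * 0.1
enhanced = np.tanh(c @ w_enh)    # feature-enhancement chain
reference = c                    # feed-forward reference chain
c_transferred = enhanced + reference

# Fusion in the decoder, sketched here as AdaIN-style statistic matching
# (an assumption: the paper only says features are "fused in the decoder").
def adain(content, style, eps=1e-5):
    c_norm = (content - content.mean()) / (content.std() + eps)
    return c_norm * style.std() + style.mean()

fused = adain(c_transferred, s)
print(fused.shape)  # (1, 32)
```

The additive reference chain plays the same role as a skip connection: even if the enhancement chain distorts features, the unmodified copy keeps content recoverable downstream.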
