Abstract

Recently, Deep Convolutional Neural Networks (DCNNs) have achieved remarkable progress in the computer vision community, including on style transfer tasks. Most methods feed the full image to the DCNN. Although high-quality results can be achieved in this manner, several underlying problems arise. First, as image resolution increases, the memory footprint grows dramatically, leading to high latency and massive power consumption. Furthermore, these methods usually cannot be integrated with commercial image signal processors (ISPs), which process images in a line-sequential manner. To solve the above problems, we propose a novel ISP-friendly deep learning-based style transfer algorithm: SequentialStyle. We introduce a new line-sequential processing mode in which the image is split into strips and each strip is processed sequentially, reducing the memory demand. We further propose a Spatial-Temporal Synergistic (STS) mechanism that decouples the previously monolithic 2-D image style transfer into spatial feature processing (in-strip) and temporal correlation transmission (between strips). Experimental results show that SequentialStyle is competitive with SOTA style transfer algorithms while consuming less memory, even for images at 4K resolution or higher.
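
To make the strip-wise idea concrete, below is a minimal sketch of line-sequential processing with a hidden state carried between strips. This is not the authors' implementation: the strip height, the single-convolution "stylizer", and the feature-concatenation handoff standing in for the STS mechanism are all illustrative assumptions.

```python
# Minimal sketch of strip-wise sequential stylization (illustrative, not the
# paper's architecture). Spatial features are computed per strip; a hidden
# state passed between strips stands in for temporal correlation transmission.
import torch
import torch.nn as nn

class StripStylizer(nn.Module):
    def __init__(self, channels=16, strip_h=32):
        super().__init__()
        self.strip_h = strip_h  # assumed strip height; real systems may use ISP line buffers
        self.encode = nn.Conv2d(3, channels, 3, padding=1)            # in-strip spatial features
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)   # fuse with previous-strip state
        self.decode = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, image):  # image: (1, 3, H, W)
        _, _, H, W = image.shape
        state = torch.zeros(1, self.encode.out_channels, self.strip_h, W)
        outs = []
        for y in range(0, H, self.strip_h):  # only one strip is resident at a time
            strip = image[:, :, y:y + self.strip_h, :]
            feat = torch.relu(self.encode(strip))
            prev = state[:, :, :feat.shape[2], :]          # last strip may be shorter
            feat = torch.relu(self.fuse(torch.cat([feat, prev], dim=1)))
            state = feat.detach()                          # hand correlation to the next strip
            outs.append(self.decode(feat))
        return torch.cat(outs, dim=2)  # reassemble the full stylized image

out = StripStylizer()(torch.rand(1, 3, 128, 64))
print(out.shape)  # torch.Size([1, 3, 128, 64])
```

The point of the sketch is the memory profile: peak activation memory scales with the strip size rather than the full image, which is why such a scheme stays tractable at 4K and above.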
