Abstract

Screen content coding (SCC) has recently been supported in Versatile Video Coding (VVC) to improve the coding efficiency of screen content videos by adopting new coding modes dedicated to screen content compression. Two new coding modes, Intra Block Copy (IBC) and Palette (PLT), are introduced. However, the flexible quad-tree plus multi-type tree (QTMT) structure for coding unit (CU) partitioning in VVC makes fast SCC algorithm design particularly challenging. To efficiently reduce the computational complexity of SCC in VVC, we propose a deep-learning-based fast prediction network, FastSCCNet, built on a fully convolutional network (FCN). CUs are classified into natural content blocks (NCBs) and screen content blocks (SCBs). With the FCN, a single inference classifies the block types of the current CU and all of its sub-CUs. After block classification, different subsets of coding modes are assigned according to the block type to accelerate the encoding process. Compared with conventional SCC in VVC, the proposed FastSCCNet reduces encoding time by 29.88% on average, with a negligible bitrate increase under the all-intra configuration. To the best of our knowledge, this is the first approach to tackle computational complexity reduction for SCC in VVC.
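
As a rough illustration of the idea described above (not the authors' FastSCCNet), the PyTorch sketch below shows how a small fully convolutional classifier could label a CU and its sub-CUs as SCB or NCB in one forward pass and then restrict the candidate coding-mode subsets accordingly. The class name, the 32x32 input size, layer widths, threshold, and the mode-subset mapping are all assumptions made for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScreenContentClassifierSketch(nn.Module):
    """Tiny FCN (illustrative only): maps a 32x32 luma CU to an 8x8 grid of
    SCB probabilities (one per 4x4 area), so a single forward pass covers
    the CU and its sub-CUs at once."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                   # 16x16 -> 8x8
        )
        self.head = nn.Conv2d(32, 1, kernel_size=1)            # per-cell SCB logit

    def forward(self, luma_cu):                                # (N, 1, 32, 32)
        return torch.sigmoid(self.head(self.backbone(luma_cu)))  # (N, 1, 8, 8)

def assign_mode_subsets(prob_map, threshold=0.5):
    """Classify the CU and its four quad-split sub-CUs from one probability
    map, then pick a candidate mode subset per block (assumed mapping:
    SCBs test IBC/PLT plus regular intra, NCBs skip IBC/PLT)."""
    cu_is_scb = prob_map.mean().item() > threshold
    # Average each 4x4 quadrant of the 8x8 map: one value per quad-split sub-CU.
    sub_probs = F.avg_pool2d(prob_map, kernel_size=4).flatten().tolist()
    subset = lambda scb: ["IBC", "PLT", "INTRA"] if scb else ["INTRA"]
    return {"cu": subset(cu_is_scb),
            "sub_cus": [subset(p > threshold) for p in sub_probs]}

if __name__ == "__main__":
    net = ScreenContentClassifierSketch().eval()
    cu = torch.rand(1, 1, 32, 32)              # dummy luma block in [0, 1]
    with torch.no_grad():
        print(assign_mode_subsets(net(cu)[0]))
```

The one-shot aspect comes from the fully convolutional design: because the output is a spatial probability map rather than a single score, block-type decisions for the CU and its sub-CUs are read off the same map without re-running the network per partition.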
