Abstract

Deploying super-resolution models on metaverse terminal devices can enhance visual quality without increasing network bandwidth. However, most current super-resolution networks are difficult to deploy on such devices with limited hardware resources because of their large model sizes and high computational cost. In this paper, we present a lightweight separation and distillation network (LSDN) that reduces model complexity through careful network structure design. Specifically, we first adopt blueprint separable convolution (BSConv) to decrease model complexity and combine BSConv with an information distillation mechanism to build the channel separation distillation block (CSDB). We then employ the enhanced spatial attention block (ESA) and Fused-MBConv (FMBConv) to exploit latent information, and compose three CSDBs, an ESA, and an FMBConv into the residual attention unit (RAU). Finally, we cascade several RAUs, fuse their hierarchical features, and feed the result to an upsampler to reconstruct high-resolution images. Comprehensive experiments on a range of datasets show that LSDN outperforms state-of-the-art approaches, with notable gains in both quantitative and qualitative terms.
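
For a concrete picture of how these components could fit together, the following PyTorch sketch illustrates one plausible realization of BSConv, the CSDB, ESA, FMBConv, and the RAU. The channel-split ratio, expansion factor, activations, and the internal layout of each block are illustrative assumptions, not the authors' exact LSDN configuration.

# Illustrative PyTorch sketch of the building blocks named in the abstract.
# Channel counts, split ratios, and activations are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BSConv(nn.Module):
    """Blueprint separable convolution: a pointwise (1x1) convolution
    followed by a depthwise k x k convolution."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.pw = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.dw = nn.Conv2d(out_ch, out_ch, k, padding=k // 2, groups=out_ch)

    def forward(self, x):
        return self.dw(self.pw(x))


class CSDB(nn.Module):
    """Channel separation distillation block (sketch): at each step a slice of
    the channels is distilled (kept) and the remainder is refined by a BSConv;
    distilled and refined features are then fused by a 1x1 convolution."""
    def __init__(self, ch, distill_ratio=0.5):
        super().__init__()
        self.d_ch = int(ch * distill_ratio)
        self.r_ch = ch - self.d_ch
        self.refine1 = BSConv(self.r_ch, ch)
        self.refine2 = BSConv(self.r_ch, ch)
        self.fuse = nn.Conv2d(2 * self.d_ch + ch, ch, 1)
        self.act = nn.GELU()

    def forward(self, x):
        d1, r1 = torch.split(x, [self.d_ch, self.r_ch], dim=1)
        y1 = self.act(self.refine1(r1))
        d2, r2 = torch.split(y1, [self.d_ch, self.r_ch], dim=1)
        y2 = self.act(self.refine2(r2))
        return self.fuse(torch.cat([d1, d2, y2], dim=1)) + x


class ESA(nn.Module):
    """Simplified enhanced spatial attention: squeeze channels, enlarge the
    receptive field via strided convolution and pooling, then gate the input
    with a per-pixel sigmoid mask."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        mid = ch // reduction
        self.squeeze = nn.Conv2d(ch, mid, 1)
        self.conv = nn.Conv2d(mid, mid, 3, stride=2, padding=1)
        self.expand = nn.Conv2d(mid, ch, 1)

    def forward(self, x):
        a = self.squeeze(x)
        a = F.max_pool2d(self.conv(a), kernel_size=7, stride=3)
        a = F.interpolate(a, size=x.shape[2:], mode="bilinear", align_corners=False)
        return x * torch.sigmoid(self.expand(a))


class FMBConv(nn.Module):
    """Fused-MBConv: a 3x3 expansion convolution followed by a 1x1 projection,
    wrapped in a residual connection."""
    def __init__(self, ch, expansion=2):
        super().__init__()
        self.expand = nn.Conv2d(ch, ch * expansion, 3, padding=1)
        self.project = nn.Conv2d(ch * expansion, ch, 1)
        self.act = nn.GELU()

    def forward(self, x):
        return self.project(self.act(self.expand(x))) + x


class RAU(nn.Module):
    """Residual attention unit: three CSDBs followed by ESA and FMBConv,
    with a long residual connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(CSDB(ch), CSDB(ch), CSDB(ch), ESA(ch), FMBConv(ch))

    def forward(self, x):
        return self.body(x) + x


if __name__ == "__main__":
    x = torch.randn(1, 48, 64, 64)  # dummy low-resolution feature map
    print(RAU(48)(x).shape)         # -> torch.Size([1, 48, 64, 64])

In the full network, several such RAUs would be stacked, their hierarchical outputs fused, and the result passed to an upsampler (e.g., a pixel-shuffle layer) to produce the high-resolution image.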

