Abstract

This paper proposes a normalizing flow-based image super-resolution method using attention modules. In the proposed method, features of the low-resolution image are extracted with a Swin Transformer, and multi-head attention in the flow layers makes effective use of the resulting feature maps. This architecture enables the efficient injection of the transformer-extracted low-resolution features into the flow layers. Experimental results at ×4 magnification show that the proposed method achieves state-of-the-art quantitative metrics and visual quality among single-loss architectures. © 2024 Institute of Electrical Engineers of Japan and Wiley Periodicals LLC.
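To make the described architecture concrete, the following is a minimal PyTorch sketch of one flow step whose affine transform is conditioned on low-resolution features through multi-head cross-attention. The class name CrossAttnCoupling, the tensor shapes, and the way the attended features produce the scale and shift are illustrative assumptions, not the paper's exact implementation; the low-resolution tokens stand in for the output of a Swin Transformer encoder.

import torch
import torch.nn as nn

class CrossAttnCoupling(nn.Module):
    """Illustrative affine coupling step (not the paper's code):
    flow activations query low-resolution feature tokens via
    multi-head cross-attention, and the attended context produces
    the scale and shift of the coupling transform."""

    def __init__(self, channels, cond_dim, num_heads=4):
        super().__init__()
        half = channels // 2  # channels is assumed even
        self.attn = nn.MultiheadAttention(
            embed_dim=half, num_heads=num_heads,
            kdim=cond_dim, vdim=cond_dim, batch_first=True)
        self.to_scale_shift = nn.Conv2d(half, channels,
                                        kernel_size=3, padding=1)

    def forward(self, x, lr_feats):
        # x: (B, C, H, W) flow activations
        # lr_feats: (B, N, cond_dim) low-resolution feature tokens
        x1, x2 = x.chunk(2, dim=1)
        b, c, h, w = x1.shape
        q = x1.flatten(2).transpose(1, 2)          # (B, H*W, C/2) queries
        ctx, _ = self.attn(q, lr_feats, lr_feats)  # attend over LR tokens
        ctx = ctx.transpose(1, 2).reshape(b, c, h, w)
        scale, shift = self.to_scale_shift(ctx).chunk(2, dim=1)
        log_scale = torch.tanh(scale)              # bounded for stability
        y2 = x2 * torch.exp(log_scale) + shift     # affine transform of x2
        logdet = log_scale.flatten(1).sum(dim=1)   # log|det J| for the flow loss
        return torch.cat([x1, y2], dim=1), logdet

As a usage sketch, with assumed sizes (64 flow channels, 256 conditioning tokens of dimension 180):

x = torch.randn(2, 64, 32, 32)       # flow activations
lr_feats = torch.randn(2, 256, 180)  # e.g. Swin-style tokens
layer = CrossAttnCoupling(channels=64, cond_dim=180)
y, logdet = layer(x, lr_feats)

The returned log-determinant term is what lets such a layer be trained under a single negative log-likelihood loss, consistent with the single-loss setting the abstract refers to.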
