Old photos preserve memories of the past and hold special meaning in our lives. However, they often suffer from severe and complex degradations, such as cracks, dirt, and noise. Old photo restoration is thus a meaningful but challenging task. Most current deep learning methods for repairing corrupted old photos focus on a single type of degradation or a limited number of degradations. The lack of suitable old photo datasets makes digital restoration even more challenging. In this paper, we propose MDTNet, a novel transformer-based method for restoring old photos with multiple degradations. MDTNet is an efficient end-to-end transformer architecture that completes the restoration with only a single encoder and a single decoder. Specifically, a novel partial transformer encoder is designed with partial multi-head self-attention to extract local-global context information from the valid (unbroken) regions indicated by the mask. In addition, a degradation-aware module is proposed in the decoder to automatically learn information about different degradations, improving restoration under multiple degradations. Moreover, a synthetic old photo dataset, SynOld, was built to simulate the multiple degradations found in real old photos. Experimental results show that the proposed method outperforms state-of-the-art methods on both public datasets and the custom-built dataset. The code is available at https://github.com/YuanZhaoc/MDTNet.
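The core idea of partial multi-head self-attention is to restrict attention so that every output token aggregates context only from valid (unbroken) positions. The paper does not give implementation details here, so the following is a minimal, dependency-light sketch under assumed conventions: identity projections stand in for the learned query/key/value weights, and invalid key positions are masked to negative infinity before the softmax.

```python
import numpy as np

def partial_self_attention(x, valid, num_heads=2):
    """Sketch of masked ("partial") multi-head self-attention.

    x:     (seq_len, dim) token features
    valid: (seq_len,) boolean mask, True for unbroken-region tokens

    Keys/values at invalid positions are excluded from attention, so
    every output token aggregates context only from valid regions.
    This is an illustrative assumption, not the paper's exact design.
    """
    seq_len, dim = x.shape
    head_dim = dim // num_heads
    # Identity projections keep the sketch self-contained; a real model
    # would apply learned Wq, Wk, Wv projection matrices here.
    q = k = v = x.reshape(seq_len, num_heads, head_dim)

    out = np.empty_like(q)
    for h in range(num_heads):
        scores = q[:, h] @ k[:, h].T / np.sqrt(head_dim)  # (seq, seq)
        scores[:, ~valid] = -np.inf  # mask out degraded positions
        # Numerically stable softmax over the key axis
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[:, h] = weights @ v[:, h]
    return out.reshape(seq_len, dim)
```

Because attention weights to masked positions are exactly zero, each output is a convex combination of valid-region features only, which matches the stated goal of extracting context solely from the unbroken area.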