Abstract

Show-through has long been a challenging issue in color-document image processing, which is widely used in fields such as finance, education, and administration. Existing methods for processing color-document images struggle to handle double-sided documents with show-through effects, to accurately distinguish foreground content from show-through, and to cope with the scarcity of real image data for supervised training. To overcome these challenges, this paper proposes a self-supervised-learning-based method for removing show-through effects in color-document images. The proposed method uses a two-stage show-through removal network that incorporates a double-cycle consistency loss and a pseudo-similarity loss to effectively constrain the show-through removal process. Moreover, we constructed two datasets with different show-through mixing ratios and conducted extensive experiments to verify the effectiveness of the proposed method. Experimental results demonstrate that the proposed method achieves competitive performance compared with state-of-the-art methods and can effectively remove show-through without paired training data. Specifically, the proposed method achieves an average PSNR of 33.85 dB on our datasets, outperforming comparable methods by 0.89 dB.
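
To make the training objective more concrete, the sketch below illustrates one plausible way to implement a double-cycle consistency loss for two-sided show-through removal in PyTorch. The linear mixing model, the `RemovalNet` interface, the mixing ratio `alpha`, and the equal weighting of the two cycle terms are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a double-cycle consistency loss for two-sided
# show-through removal. The mixing model, network interface, and
# loss weighting below are illustrative assumptions.
import torch
import torch.nn as nn

class RemovalNet(nn.Module):
    """Placeholder show-through removal network (assumed interface:
    takes one observed side, returns the estimated clean side)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

def remix(clean_front, clean_back, alpha=0.3):
    """Assumed linear show-through model: each observed side is the
    clean side blended with the horizontally flipped other side."""
    back_flipped = torch.flip(clean_back, dims=[-1])
    front_flipped = torch.flip(clean_front, dims=[-1])
    obs_front = (1 - alpha) * clean_front + alpha * back_flipped
    obs_back = (1 - alpha) * clean_back + alpha * front_flipped
    return obs_front, obs_back

def double_cycle_loss(net, obs_front, obs_back, alpha=0.3):
    """Cycle 1: clean both sides, re-mix them, and require the re-mixed
    images to match the observations. Cycle 2: run the removal network
    on the re-mixed images and require it to reproduce the first-pass
    clean estimates."""
    l1 = nn.L1Loss()
    clean_f = net(obs_front)
    clean_b = net(obs_back)
    remix_f, remix_b = remix(clean_f, clean_b, alpha)
    cycle1 = l1(remix_f, obs_front) + l1(remix_b, obs_back)
    cycle2 = l1(net(remix_f), clean_f) + l1(net(remix_b), clean_b)
    return cycle1 + cycle2

# Usage on random tensors standing in for scanned document pages.
if __name__ == "__main__":
    net = RemovalNet()
    front = torch.rand(1, 3, 64, 64)
    back = torch.rand(1, 3, 64, 64)
    loss = double_cycle_loss(net, front, back)
    loss.backward()
    print(loss.item())
```

In this reading, the first cycle ties the cleaned sides back to the observations through the assumed mixing model, while the second cycle asks the network to stay consistent with its own first-pass estimates, which is what lets the objective be trained without paired ground truth.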
