Abstract
Archaeological illustration is a graphic recording technique that delineates the shape, structure, and ornamentation of cultural artifacts using lines, serving as vital material for archaeological fieldwork and scholarly research. To address the low line accuracy of current mainstream image generation algorithms and the interference caused by severe mural damage, this paper proposes U2FGAN, a mural archaeological illustration generation algorithm based on multi-branch feature cross fusion. The algorithm optimizes the skip connections of U2Net with a channel attention mechanism and constructs a multi-branch generator comprising a line extractor and an edge detector, which separately identify line features and edge information in artifact images before fusing them to generate accurate, high-resolution illustrations. A multi-scale conditional discriminator is further incorporated to guide the generator toward high-quality illustrations with clear details and intact structures. Experiments on the Dunhuang mural illustration datasets show that, compared with mainstream counterparts, U2FGAN reduces the Mean Absolute Error (MAE) by 10.8% to 26.2% and improves Precision by 9.8% to 32.3%, Fβ-Score by 5.1% to 32%, and PSNR by 0.4 to 2.2 dB. These results demonstrate that the proposed method outperforms other mainstream algorithms in archaeological illustration generation.
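To make the two architectural ideas named in the abstract concrete, the following is a minimal sketch (not the authors' code) of a channel-attention gate applied to skip-connection features and a cross fusion of line-branch and edge-branch feature maps. The module names, channel sizes, and the squeeze-and-excitation form of the attention are assumptions, since the paper's exact design is not given here.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate used to re-weight skip-connection channels (assumed form)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global spatial squeeze
        self.fc = nn.Sequential(                     # channel excitation MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # gated skip features


class CrossFusion(nn.Module):
    """Fuses line-extractor and edge-detector features before the output head (hypothetical layout)."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn_line = ChannelAttention(channels)
        self.attn_edge = ChannelAttention(channels)
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, line_feat: torch.Tensor, edge_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.attn_line(line_feat),
                           self.attn_edge(edge_feat)], dim=1)
        return self.merge(fused)


if __name__ == "__main__":
    # Toy check: two 64-channel feature maps standing in for the two branches.
    line_feat = torch.randn(1, 64, 128, 128)
    edge_feat = torch.randn(1, 64, 128, 128)
    out = CrossFusion(64)(line_feat, edge_feat)
    print(out.shape)  # torch.Size([1, 64, 128, 128])
```

In this sketch the attention gates let each branch emphasize the channels most relevant to line versus edge structure before a 1x1 convolution merges them; the actual U2FGAN fusion and the multi-scale conditional discriminator are described only at the level of the abstract above.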