Abstract

Hyperspectral images can significantly increase the accuracy of textile color measurement because of the rich spectral information they carry. However, hyperspectral imaging generally requires expensive equipment and complex operation. If the hyperspectral information could be reconstructed from a single RGB image, hyperspectral imaging technology could be applied far more widely, for example in textile color measurement. In this paper, a deep learning model based on the conditional generative adversarial network was proposed for hyperspectral reconstruction of cotton and linen fabrics. In this model, an encoder–decoder structure and a spatial pyramid convolution pooling operation were adopted to fuse multi-scale features and prevent mode collapse, and atrous convolution was introduced to enlarge the receptive field so that the network could adapt to fabric texture; the hyperspectral information of a fabric was then reconstructed from a single RGB image. Quantitative and qualitative tests verified the effectiveness of the proposed method: the root mean square error and peak signal-to-noise ratio of the reconstructed fabric hyperspectral images were 0.0271 and 31.372, respectively, and the highest average color difference ΔE in the reconstructed hyperspectral colorimetry experiment was 2.755. Thus, the proposed method can meet common application requirements of color measurement.
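The abstract describes a generator that fuses multi-scale features with a spatial-pyramid-style operation and uses atrous (dilated) convolutions to enlarge the receptive field. The sketch below is illustrative only: the layer widths, dilation rates, and the 31-band spectral output are assumptions for the example, not the configuration published in the paper.

```python
# Minimal sketch (PyTorch) of an atrous multi-scale fusion block, assumed
# dilation rates and channel sizes; not the authors' published architecture.
import torch
import torch.nn as nn

class AtrousPyramidBlock(nn.Module):
    """Extracts features at several dilation rates and fuses them."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # 1x1 convolution fuses the concatenated multi-scale branches
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

if __name__ == "__main__":
    # Hypothetical usage: map a 3-channel RGB patch to a 31-band spectral cube.
    rgb = torch.randn(1, 3, 128, 128)            # dummy RGB patch
    head = nn.Conv2d(3, 64, kernel_size=3, padding=1)
    block = AtrousPyramidBlock(64, 64)
    tail = nn.Conv2d(64, 31, kernel_size=1)      # 31 spectral bands (assumed)
    spectra = tail(block(head(rgb)))
    print(spectra.shape)                         # torch.Size([1, 31, 128, 128])
```

The two reported image-quality metrics, root mean square error and peak signal-to-noise ratio, can be computed for a reconstructed spectral cube as sketched below; the [0, 1] intensity range is an assumption.

```python
# Sketch of the reported metrics, assuming cubes scaled to [0, 1].
import numpy as np

def rmse(pred: np.ndarray, ref: np.ndarray) -> float:
    """Root mean square error over all pixels and spectral bands."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def psnr(pred: np.ndarray, ref: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for the given peak value."""
    mse = np.mean((pred - ref) ** 2)
    return float(20 * np.log10(peak) - 10 * np.log10(mse))
```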
