With rising living standards and rapid technological progress, wearable devices have become part of daily life, bringing convenience to both work and everyday activities. Traditional fabric inspection, however, still relies largely on manual visual examination, whose efficiency and accuracy fall short of practical requirements. To address this, the study integrates a generative adversarial network (GAN), a convolutional neural network (CNN), a field-programmable gate array (FPGA), and a graph neural network based multi-stage image recovery algorithm that fuses high-frequency and low-frequency components, combining them with wearable hardware to develop a new wearable fabric recovery and recognition system (WFR2S). On the COCO dataset, after 250 iterations, the WFR2S algorithm achieved a damage identification accuracy of 95.5%, a mean absolute error of 0.011, a real-time processing speed of 30 fps, and a power consumption of 2.4 W. These results demonstrate the clear performance advantages of WFR2S in image recovery and damage detection. The proposed image recovery algorithm improves the accuracy, usability, and stability of damage recognition in garment fabrics. The study therefore has both theoretical value and broad prospects for practical application, and aims to contribute to the intelligent development of the garment industry.
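As a minimal illustration of the high-/low-frequency fusion idea referred to above (not the paper's actual pipeline), the sketch below splits a fabric image into a low-frequency base and a high-frequency residual, which in a full system would be restored by separate network branches before being recombined. The function names, the Gaussian-filter decomposition, and the synthetic test image are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def split_frequency_bands(image: np.ndarray, sigma: float = 3.0):
    """Split an image into a low-frequency base and a high-frequency residual."""
    low = gaussian_filter(image.astype(np.float64), sigma=sigma)
    high = image.astype(np.float64) - low
    return low, high


def fuse_bands(low_recovered: np.ndarray, high_recovered: np.ndarray) -> np.ndarray:
    """Recombine separately recovered frequency bands into one image."""
    fused = low_recovered + high_recovered
    return np.clip(fused, 0.0, 255.0).astype(np.uint8)


if __name__ == "__main__":
    # Synthetic grayscale "fabric" texture with a simulated damaged region
    # (purely illustrative data, not from the study).
    rng = np.random.default_rng(0)
    fabric = (128 + 40 * np.sin(np.linspace(0, 20 * np.pi, 256))[None, :]
              + rng.normal(0, 5, (256, 256)))
    fabric[100:120, 100:140] = 0  # simulated damage

    low, high = split_frequency_bands(fabric)
    # In a complete system each band would be restored by its own model
    # (e.g. one branch for fine texture, another for global structure);
    # here the bands pass through unchanged to show the decomposition/fusion flow.
    recovered = fuse_bands(low, high)
    print(recovered.shape, recovered.dtype)
```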