Routine visual inspection is fundamental to the preventive maintenance of power equipment. Convolutional neural networks (CNNs) substantially reduce the number of parameters and efficiently extract image features for classification tasks. In the actual production and operation of substations, however, safety-distance requirements prevent monitoring cameras, inspection robots, and similar devices from approaching the target closely. The operational environment of power equipment therefore introduces scale variations in the main target, which degrade the performance of conventional models. To address the challenges posed by scale fluctuations in power equipment image datasets, while meeting requirements for model efficiency and enhanced inter-channel communication, this paper proposes the ResNet Cross-Layer Parameter Sharing (ResNetCPS) framework. The core idea is that the network output should remain consistent for the same object at different scales. The proposed framework shares weights across different layers of the convolutional network, establishing connections between related channels across layers and exploiting the scale invariance inherent in image datasets. Additionally, because substation image processing runs mainly on edge devices, smaller models are required to reduce computational cost. The Cross-Layer Parameter Sharing framework not only reduces the overall number of model parameters but also decreases training time. To further enhance the representation of critical features while suppressing less important or redundant ones, an Inserting and Adjacency Attention (IAA) module is designed. This mechanism improves the model's overall performance by dynamically adjusting the importance of different channels. Experimental results demonstrate that the proposed method significantly enhances network efficiency, reduces the total parameter storage space, and improves training efficiency without sacrificing accuracy.
Specifically, models incorporating the Cross-Layer Parameter Sharing module achieved a reduction in parameter count and model size of 10% to 30% compared to the baseline models.
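The parameter saving from cross-layer weight sharing can be illustrated with a small back-of-the-envelope sketch. The layer shapes, depth, and sharing pattern below are illustrative assumptions, not the paper's actual architecture: when several same-shaped convolutional layers reference one weight tensor, the stack stores a single layer's parameters instead of one set per layer.

```python
def conv_params(in_ch: int, out_ch: int, k: int) -> int:
    """Parameter count of one k x k convolution (weights + biases)."""
    return out_ch * in_ch * k * k + out_ch

def stack_params(depth: int, in_ch: int, out_ch: int, k: int,
                 shared: bool = False) -> int:
    """Total parameters of `depth` identical conv layers.

    With cross-layer sharing, every layer reuses the same weight
    tensor, so the storage cost is that of a single layer.
    """
    per_layer = conv_params(in_ch, out_ch, k)
    return per_layer if shared else depth * per_layer

# Hypothetical example: four 3x3 conv layers, 64 channels in and out.
baseline = stack_params(4, 64, 64, 3)               # independent weights
shared = stack_params(4, 64, 64, 3, shared=True)    # one shared tensor
print(baseline, shared, 1 - shared / baseline)      # 147712 36928 0.75
```

The achievable saving in a full network is smaller than this idealized 75% because only layers with matching shapes can share weights, which is consistent with the 10% to 30% overall reduction reported above.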