Abstract
Catenary fault detection based on image processing plays an important role in railway transportation safety systems, and image segmentation is a critical stage of such detection. The conventional pulse coupled neural network (PCNN) model has many parameters that must be set in advance for image segmentation; parameter setting is a critical but complex task, and both segmentation quality and efficiency depend heavily on it. To overcome these disadvantages, a simplified PCNN (SPCNN) model based on the conventional PCNN is introduced. First, the input domain is simplified: the external stimulus is fed directly to the neuron. Then the linking input and the dynamic threshold of the neuron are simplified. Furthermore, the linking coefficient of the modulation field is adaptively determined by the normalized mean square error, and the number of iterations is adaptively determined according to the minimum cross entropy. The simplified model has fewer parameters but retains the essential mechanisms of PCNN. Finally, several sets of catenary images collected in real time are segmented by the SPCNN model with adaptive parameters. Experimental results show that the proposed method not only segments images significantly better than conventional methods but also preserves the continuity and integrity of the segmented regions, especially for pull rods, posts, insulators, and other catenary components. It is also superior to conventional segmentation methods in terms of parameter setting, visual appearance, and the objective evaluation criteria VOI and PRI.
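To make the mechanism concrete, the following is a minimal sketch of a simplified PCNN iteration of the general kind the abstract describes: each pixel is a neuron, the normalized intensity is fed directly as the external stimulus, a 3x3 linking input couples neighbouring neurons, and a pixel fires when its internal activity exceeds a decaying dynamic threshold. The parameter values (`beta`, `alpha_f`, `alpha_e`, `v_e`, `n_iter`) and the fixed 3x3 neighbourhood are illustrative assumptions, not the paper's adaptive settings; in the proposed method `beta` and the iteration count are chosen automatically via normalized mean square error and minimum cross entropy.

```python
import numpy as np

def spcnn_segment(img, beta=0.5, alpha_f=0.3, alpha_e=0.5,
                  v_e=20.0, n_iter=10):
    """Sketch of a simplified PCNN: stimulus feeds neurons directly,
    firing is gated by an exponentially decaying dynamic threshold."""
    s = img.astype(float)
    s = (s - s.min()) / (np.ptp(s) + 1e-12)  # external stimulus in [0, 1]
    u = np.zeros_like(s)                     # internal activity
    e = np.ones_like(s)                      # dynamic threshold
    y = np.zeros_like(s)                     # firing output
    fired = np.zeros_like(s, dtype=bool)     # cumulative firing map
    for _ in range(n_iter):
        # 3x3 linking input: sum of last iteration's firings of neighbours
        l = sum(np.roll(np.roll(y, dy, 0), dx, 1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
        u = np.exp(-alpha_f) * u + s * (1.0 + beta * l)  # modulation field
        y = (u > e).astype(float)                        # fire if above threshold
        e = np.exp(-alpha_e) * e + v_e * y               # raise threshold of fired neurons
        fired |= y.astype(bool)
    return fired
```

A bright region on a dark background fires within a few iterations while the background, whose activity stays at zero, never does, so `fired` serves as a binary segmentation mask.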