Abstract

Artificial intelligence (AI) applications in optical network management are increasing, especially in fault diagnosis. When a network fault or alarm occurs, the operating status and performance parameters of the equipment change. AI algorithms can extract the underlying relationships from these data, and AI-based alarm prediction methods have proved effective. However, the performance of AI models, especially supervised learning models, relies on large data sets. In practice, such large training sets are often unavailable because historical data are scarce, leaving only a small number of samples. How to improve the generalization ability of AI models therefore remains a key problem in the intelligent maintenance of optical networks. In optical network alarm prediction, device performance data are correlated with alarms to different degrees, and the generalization performance of models is limited by the structure of the data. In this paper, we propose a method to optimize model generalization in alarm prediction: a model for a small sample set is trained by transferring parameters learned during training on related data. We verify the generalization of the models using real data from a commercial optical network. Experimental results show that the models achieve significant performance improvements on data sets with a high correlation between performance parameters and alarms.
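The parameter-transfer scheme described above can be illustrated as follows: a model is first trained on a data-rich source task, its learned parameters are copied into a model for the small-sample target task, and only part of the network is then fine-tuned. The sketch below is a minimal PyTorch illustration of this general idea, not the paper's actual implementation; all names (AlarmNet, the layer sizes, the synthetic data) are assumptions made for illustration.

```python
# Minimal sketch of parameter transfer for small-sample alarm prediction.
# AlarmNet, the feature dimension, and the synthetic data are illustrative
# assumptions, not taken from the paper.
import torch
import torch.nn as nn


class AlarmNet(nn.Module):
    """Binary alarm predictor over device performance features."""

    def __init__(self, n_features: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.head = nn.Linear(32, 2)  # alarm / no-alarm logits

    def forward(self, x):
        return self.head(self.encoder(x))


def fit(model, x, y, epochs=50, lr=1e-3):
    """Full-batch training loop over the trainable parameters only."""
    opt = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr
    )
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model


# Synthetic stand-ins for real performance/alarm data: a data-rich source
# task and a small-sample target task.
x_source, y_source = torch.randn(2000, 16), torch.randint(0, 2, (2000,))
x_target, y_target = torch.randn(40, 16), torch.randint(0, 2, (40,))

# 1) Pre-train on the data-rich source task.
source = fit(AlarmNet(), x_source, y_source)

# 2) Transfer the learned parameters to the small-sample target model.
target = AlarmNet()
target.load_state_dict(source.state_dict())

# 3) Freeze the shared encoder and fine-tune only the head on the few
#    target samples, which limits overfitting.
for p in target.encoder.parameters():
    p.requires_grad = False
fit(target, x_target, y_target, epochs=20)
```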
