Abstract

Despite the increased exploration of machine learning (ML) techniques for realizing autonomous optical networks, less attention has been paid to data quality, which is critical for ML performance. ML-based failure management in optical networks is constrained by the fact that some failures occur more frequently than others, resulting in highly imbalanced datasets for training ML models. To address this limitation, this paper investigates a variational-autoencoder-based data augmentation technique that can be applied during data preprocessing to improve data quality. The synthetic data generated by the variational autoencoder are used to reduce the imbalance in an experimental dataset employed to train neural networks (NNs) for failure management in optical networks. First, it is shown that the modified training dataset reduces NN training time, with reductions of up to 37.1% and 60.6% achieved for failure detection and failure cause identification, respectively. Second, it is shown that improving the quality of the training dataset can reduce the computational complexity of the NNs during the inference phase; as determined analytically, a reduction of almost 68% is achieved for the NN used for failure cause identification. Finally, data augmentation is shown to improve classification accuracy, with gains of up to 7.32% demonstrated in this work.
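To make the augmentation step concrete, the sketch below illustrates one way variational-autoencoder-based rebalancing of a minority failure class could be implemented. It is a minimal illustration under stated assumptions, not the paper's implementation: the feature dimension, layer sizes, training schedule, and the helper augment_minority are all hypothetical choices introduced for this example.

# Minimal sketch of VAE-based data augmentation for an imbalanced failure dataset.
# Assumptions (not from the paper): monitoring samples are 16-dimensional feature
# vectors, the failure class is the minority class, and a small fully connected VAE
# trained with an MSE reconstruction term is sufficient for illustration.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_features=16, latent_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.fc_mu = nn.Linear(32, latent_dim)
        self.fc_logvar = nn.Linear(32, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_features)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior
    recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

def augment_minority(x_minority, n_synthetic, epochs=200, lr=1e-3):
    # Train the VAE on minority-class samples only, then draw synthetic
    # samples from the prior and decode them to feature space.
    model = VAE(n_features=x_minority.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        x_hat, mu, logvar = model(x_minority)
        loss = vae_loss(x_hat, x_minority, mu, logvar)
        loss.backward()
        opt.step()
    with torch.no_grad():
        z = torch.randn(n_synthetic, model.fc_mu.out_features)
        return model.decoder(z)  # synthetic minority-class samples

# Example: a dataset with 1000 normal and 50 failure samples could be rebalanced
# by generating 950 synthetic failure samples before NN training.
x_fail = torch.randn(50, 16)  # placeholder for real minority-class measurements
x_synth = augment_minority(x_fail, n_synthetic=950)

The synthetic samples would then be concatenated with the original training data during preprocessing, so that the downstream failure-detection and cause-identification NNs see a more balanced class distribution.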
