Abstract

In the realm of radio astronomy, the detection of Radio Frequency Interference (RFI) is a pivotal pursuit. This study performs a novel comparative analysis of deep learning methodologies and introduces a transfer learning method based on fine-tuning. We compare various aspects of this problem, including supervised Fully Convolutional Network (FCN) architectures used in the literature, loss functions, regularization techniques, and training methodologies, to establish the most effective strategies for RFI detection. Moreover, the relationship between parameter counts, FLOPs, and inference times is examined. Fine-tuning involves pre-training models on low-quality reference outputs from AOFlagger, a widely used and accessible RFI flagging software package, and thereafter re-training the models on high-quality reference outputs. We utilize two datasets: real observations from LOFAR and simulated data from HERA. The Mean Squared Error (MSE) loss function emerges as a robust performer when high recall is desired. In contrast, the Binary Cross-Entropy (BCE) loss function excels in generalization but falls short in classification performance. The Dice loss function is the top performer, maximizing the F1 score, and therefore serves as the choice for our further investigations. Notably, we highlight the important roles of data quality and model capacity. In particular, we find that low-capacity models exhibit resilience when trained with low-quality flags from AOFlagger, showcasing their ability to mitigate overfitting and overflagging tendencies. In contrast, high-capacity models excel when trained with high-quality flags. Fine-tuning proved to be an effective method for unlearning the overflagging tendencies of AOFlagger, whilst requiring very little data.
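The abstract notes that the Dice loss maximizes the F1 score; for binary RFI masks, the Dice coefficient is algebraically equivalent to the F1 score, so minimizing Dice loss directly optimizes it. A minimal NumPy sketch of a soft Dice loss (the exact formulation and smoothing constant `eps` are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss: 1 - 2|P.T| / (|P| + |T|).

    pred:   predicted mask (probabilities or binary flags)
    target: reference (ground-truth) RFI flags
    eps:    smoothing term to avoid division by zero on empty masks
    """
    pred = pred.ravel().astype(float)
    target = target.ravel().astype(float)
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

# A perfect prediction gives a loss near 0; a half-overlapping one gives 0.5.
print(dice_loss(np.array([1, 1, 0, 0]), np.array([1, 0, 1, 0])))  # → 0.5 (approx.)
```

Because the intersection term couples predictions and targets, Dice loss is less sensitive to the heavy class imbalance typical of RFI masks than per-pixel losses such as BCE or MSE.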
