Abstract

Advances in radio telescope technology have dramatically increased the quantity and variety of acquired pulsar candidate data, yet accurately identifying genuine pulsars among these candidates remains difficult. We therefore propose a deep learning model based on multimodal fusion, the multimodal fusion-based pulsar identification model (MFPIM), to improve the efficiency and accuracy of pulsar candidate identification. MFPIM treats each diagnostic plot of a candidate as a separate modality and uses multiple convolutional neural networks to extract effective features from the plots. Fusing these features captures the commonality of the different modalities in a high-dimensional space, allowing the model to take full advantage of the complementarity between diagnostic plots and to achieve better classification performance than other current supervised learning algorithms. In addition, a channel attention mechanism enables the model to learn the importance of different channel features, so that it focuses on the channel information in the input that is most meaningful for classification, reducing the model size while extracting diagnostic-plot features more accurately. Experiments on the Five-hundred-meter Aperture Spherical radio Telescope (FAST) data set show that MFPIM can effectively identify pulsars with an identification accuracy of over 98%. To further verify the robustness of the model, we applied MFPIM to the High Time Resolution Universe data set using transfer learning, with the test accuracy and F1 score both reaching over 99%.
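To make the described architecture concrete, the sketch below shows one way a multimodal fusion classifier with per-modality CNN branches and channel attention could be assembled in PyTorch. It is an illustrative sketch only, not the authors' implementation: the number of modalities, input resolution, branch sizes, and the squeeze-and-excitation-style attention block are all assumptions.

```python
# Minimal sketch of a multimodal-fusion pulsar candidate classifier.
# Assumptions (not from the paper): four diagnostic plots per candidate,
# each rendered as a 1x64x64 image; SE-style channel attention; small CNN branches.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global average pool
        self.fc = nn.Sequential(                         # excitation: two-layer MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                      # reweight channels by learned importance


def conv_branch(out_dim: int = 64) -> nn.Module:
    """Small CNN mapping one diagnostic plot (1x64x64) to a feature vector."""
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        ChannelAttention(32),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, out_dim), nn.ReLU(inplace=True),
    )


class MultimodalPulsarClassifier(nn.Module):
    """One CNN branch per modality; concatenated features feed a binary classifier."""
    def __init__(self, num_modalities: int = 4, feat_dim: int = 64):
        super().__init__()
        self.branches = nn.ModuleList(conv_branch(feat_dim) for _ in range(num_modalities))
        self.classifier = nn.Sequential(
            nn.Linear(num_modalities * feat_dim, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 2),                            # pulsar vs. non-pulsar
        )

    def forward(self, plots):
        # plots: list of (B, 1, 64, 64) tensors, one per diagnostic plot
        fused = torch.cat([branch(x) for branch, x in zip(self.branches, plots)], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = MultimodalPulsarClassifier()
    batch = [torch.randn(8, 1, 64, 64) for _ in range(4)]  # synthetic candidates
    print(model(batch).shape)                               # torch.Size([8, 2])
```

In this sketch the fusion step is simple feature concatenation followed by a fully connected classifier; the actual fusion strategy and attention placement in MFPIM may differ.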
