Abstract

Despite the utility of neural networks (NNs) for astronomical time-series classification, the proliferation of learning architectures applied to diverse data sets has thus far hampered a direct intercomparison of different approaches. Here we perform the first comprehensive study of variants of NN-based learning and inference for astronomical time series, aiming to provide the community with an overview of relative performance and, hopefully, a set of best-in-class choices for practical implementations. In both supervised and self-supervised contexts, we study the effects of different time-series-compatible layer choices, namely dilated temporal convolutional NNs (dTCNs), long short-term memory NNs, gated recurrent units, and temporal convolutional NNs (tCNNs). We also study the efficacy and performance of encoder-decoder (i.e., autoencoder) networks compared to direct classification networks, different pathways for including auxiliary (non-time-series) metadata, and different approaches to incorporating multi-passband data (i.e., multiple time series per source). Performance, evaluated on a sample of 17,604 variable stars (VSs) from the MAssive Compact Halo Objects (MACHO) survey across 10 imbalanced classes, is measured in terms of training convergence time, classification accuracy, reconstruction error, and the generated latent variables. We find that networks with recurrent NNs generally outperform dTCNs and, in many scenarios, yield accuracy similar to that of tCNNs. In learning time and memory requirements, however, convolution-based layers perform better. We conclude by discussing the advantages and limitations of deep architectures for VS classification, with a particular eye toward next-generation surveys such as the Legacy Survey of Space and Time, the Roman Space Telescope, and the Zwicky Transient Facility.
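To make the "dilated" qualifier of the dTCN layers concrete, the following is a minimal, illustrative NumPy sketch of a dilated causal 1D convolution; the function name and toy inputs are my own and are not taken from the paper's implementation. Each output step sees only the current and past samples at spacing `dilation`, so stacking layers with growing dilation enlarges the receptive field over a light curve without adding parameters.

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation=1):
    """Causal 1D convolution with dilation (illustrative sketch).

    Output step t combines inputs at t, t - dilation, t - 2*dilation, ...
    so no future samples leak into the prediction for time t.
    """
    k = len(w)
    y = np.zeros(len(x), dtype=float)
    for t in range(len(x)):
        for i in range(k):
            j = t - i * dilation  # look back i taps, spaced by `dilation`
            if j >= 0:
                y[t] += w[i] * x[j]
    return y

x = np.arange(6, dtype=float)  # toy flux series [0, 1, 2, 3, 4, 5]
w = np.array([1.0, 1.0])       # kernel of size 2

print(dilated_causal_conv1d(x, w, dilation=1))  # [0. 1. 3. 5. 7. 9.]
print(dilated_causal_conv1d(x, w, dilation=2))  # [0. 1. 2. 4. 6. 8.]
```

With `dilation=2` the same 2-tap kernel spans 3 time steps, which is the mechanism dTCNs use to cover long, irregularly long baselines cheaply.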
