ABSTRACT In the new era of very large telescopes, where data are crucial to expanding scientific knowledge, we have witnessed many deep learning applications for the automatic classification of light curves. Recurrent neural networks (RNNs) are one of the models used for these applications, and the Long Short-Term Memory (LSTM) unit stands out as an excellent choice for representing long time series. In general, RNNs assume observations at regularly spaced discrete times, which may not suit the irregular sampling of light curves. A traditional technique for handling irregular sequences is to add the sampling times to the network's input, but this is not guaranteed to capture the sampling irregularities during training. Alternatively, the Phased LSTM (PLSTM) unit was created to address this problem by updating its state using the sampling times explicitly. In this work, we study the effectiveness of LSTM- and PLSTM-based architectures for the classification of astronomical light curves. We use seven catalogues containing periodic and non-periodic astronomical objects. Our findings show that the LSTM outperformed the PLSTM on six of the seven data sets; however, combining both units improves the results on all data sets.
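To illustrate how the PLSTM uses sampling times explicitly, the sketch below implements its time gate in NumPy, following the standard PLSTM formulation (a periodic openness function with learnable period, phase shift, and open ratio, plus a small leak). This is only an illustrative sketch, not code from this work; the variable names (`tau`, `s`, `r_on`, `alpha`) are our own.

```python
import numpy as np

def plstm_time_gate(t, tau, s, r_on, alpha=1e-3):
    """Openness k(t) of the PLSTM time gate at observation times t.

    The gate opens periodically: it ramps linearly from 0 to 1 during the
    first half of the open phase, back down to 0 during the second half,
    and stays nearly closed (small leak alpha) otherwise. The LSTM cell
    state is only fully updated while the gate is open, so irregular
    sampling times enter the update rule explicitly.

    t     : array of sampling times (e.g. observation dates)
    tau   : oscillation period of the gate
    s     : phase shift
    r_on  : fraction of the period during which the gate is open
    alpha : leak rate while the gate is closed
    """
    phi = np.mod(t - s, tau) / tau  # phase within the cycle, in [0, 1)
    return np.where(
        phi < r_on / 2,
        2.0 * phi / r_on,                       # rising half of the open phase
        np.where(
            phi < r_on,
            2.0 - 2.0 * phi / r_on,             # falling half of the open phase
            alpha * phi,                        # closed phase: small leak
        ),
    )

# Example: with tau=1, s=0, r_on=0.5 the gate is fully open at t=0.25
# and nearly closed at t=0.75.
k = plstm_time_gate(np.array([0.0, 0.25, 0.75]), tau=1.0, s=0.0, r_on=0.5)
```

In a PLSTM cell, k(t) multiplicatively blends the candidate cell state with the previous one, so observations arriving while the gate is closed barely change the memory, which is how the unit accounts for irregular gaps between observations.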