Abstract

In recent years, convolutional neural networks (CNNs) have been successfully applied to the task of exoplanet detection. The success of CNNs in many domains has been attributed to overparameterization: in image recognition, larger CNNs with more free parameters tend to yield better predictions. In exoplanet detection, the reverse seems to apply. Astronet and Exonet, the first CNNs for exoplanet detection, had more than 10 million parameters. Exonet-XS is a much smaller network (0.85 million parameters) that uses domain knowledge and performs almost on par with its larger variant Exonet. In earlier work, we proposed Genesis, an even smaller network with fewer than half the parameters of Exonet-XS (0.39 million). In this paper, we examine and compare the performances of Genesis, Exonet-XS, and Exonet (all of which include stellar parameters as domain knowledge) using a proper cross-validation procedure. We find that Genesis, despite its smaller size, outperforms Exonet-XS and performs at a level comparable with Exonet. Hence, reducing overparameterization seems to be beneficial for exoplanet detection from folded light curves. In an attempt to uncover the cause of Genesis’ success, we study the effect of varying the hyperparameters controlling the convolutional and pooling layers. We find that the performance of Genesis can be explained by its sparsity and its use of large convolution filters and pooling strides. We conclude that Genesis, with its large filters and pooling strides, is to be preferred for exoplanet detection over previously proposed architectures.
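To make the parameter-count comparison concrete, the following sketch counts the weights in a stack of 1D convolutional layers. The layer configuration shown is purely hypothetical and illustrative, not the actual Genesis architecture; it only demonstrates why a sparse network with few channels stays far below the multi-million-parameter regime even when the convolution filters are large.

```python
def conv1d_params(in_ch: int, out_ch: int, kernel: int) -> int:
    """Parameter count of a 1D convolution: one weight per
    (input channel x output channel x kernel position), plus
    one bias per output channel."""
    return in_ch * out_ch * kernel + out_ch

# Hypothetical layer stack with large kernels but few channels
# (in_channels, out_channels, kernel_size) -- illustrative only.
layers = [(1, 16, 50), (16, 32, 50), (32, 32, 50)]

total = sum(conv1d_params(i, o, k) for i, o, k in layers)
print(total)  # well under 0.1 million parameters
```

Large pooling strides compound this effect: aggressive downsampling shrinks the feature maps quickly, so any fully connected layers at the end of the network also need far fewer weights.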
