ABSTRACT In recent years, a Gaussian process regression (GPR)-based framework has been developed for foreground mitigation in data collected by the LOw-Frequency ARray (LOFAR), to measure the 21-cm signal power spectrum from the Epoch of Reionization (EoR) and cosmic dawn. However, it has been noted that this method can incur significant signal loss if the EoR signal covariance is misestimated. To obtain better covariance models, we propose to use a kernel trained on the grizzly simulations with a Variational Auto-Encoder (VAE)-based algorithm. In this work, we assess the capabilities of this machine learning-based kernel (VAE kernel) used with GPR, testing it on mock signals from a variety of simulations at noise levels corresponding to ≈10 nights (≈141 h) and ≈100 nights (≈1410 h) of observations with LOFAR. Our results suggest that the 21-cm signal can be successfully extracted within 2σ uncertainty in most cases using the VAE kernel, with better recovery of both shape and power than with previously used covariance models. We also explore the role of the excess noise component identified in past applications of GPR, and additionally analyse the possible redshift dependence of the VAE kernel's performance. The latter allows us to prepare for future LOFAR observations at a range of redshifts, as well as to compare with results from other telescopes.
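The core idea of GPR-based foreground mitigation can be illustrated with a minimal sketch (this is not the authors' pipeline): the data covariance along frequency is modelled as a sum of component kernels, and the foreground posterior mean is subtracted from the data. Here a generic squared-exponential kernel stands in for both the smooth foreground and the small-scale 21-cm component; in the paper the latter would instead be the VAE-trained kernel. All numerical values below are illustrative, not drawn from the paper.

```python
# Minimal GPR foreground-removal sketch: model the data covariance as a
# sum of a smooth foreground kernel, a small-scale signal kernel, and
# white noise, then subtract the foreground posterior mean.
import numpy as np


def rbf(x, y, variance, length):
    """Squared-exponential covariance; a stand-in for both the smooth
    foreground kernel and the 21-cm kernel (a VAE-trained kernel in the
    paper)."""
    d = x[:, None] - y[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)


rng = np.random.default_rng(0)
freq = np.linspace(0.0, 1.0, 60)  # stand-in for the observing band (arbitrary units)

# Mock data: very smooth, bright foreground + faint small-scale signal + noise.
K_fg = rbf(freq, freq, variance=10.0, length=0.5)
K_21 = rbf(freq, freq, variance=0.1, length=0.02)
noise_var = 0.01
K_data = K_fg + K_21 + noise_var * np.eye(len(freq))
data = rng.multivariate_normal(np.zeros(len(freq)), K_data)

# GPR separation: the foreground posterior mean given the data is
# K_fg @ K_data^{-1} @ data; the residual retains the signal + noise.
fg_mean = K_fg @ np.linalg.solve(K_data, data)
residual = data - fg_mean
```

The key point the abstract makes is that the quality of this separation hinges on the signal kernel (`K_21` here): if its shape or amplitude is misestimated, part of the signal is absorbed into the foreground model and lost, which is what motivates learning the kernel from simulations.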