Abstract Enhanced modeling of microlensing variations in the light curves of strongly lensed quasars improves measurements of cosmological time delays, the Hubble constant, and quasar structure. Traditional methods for modeling extragalactic microlensing rely on computationally expensive magnification-map generation. With the large data sets expected from wide-field surveys such as the Vera C. Rubin Observatory Legacy Survey of Space and Time, including thousands of lensed quasars and hundreds of multiply imaged supernovae, faster approaches become essential. We introduce a deep-learning model trained on pre-computed magnification maps covering the parameter space on a grid of κ, γ, and s. Our autoencoder creates a low-dimensional latent-space representation of these maps, enabling efficient map generation. Quantifying the performance of magnification-map generation from a low-dimensional space is an essential step on the roadmap toward neural-network-based models that can replace traditional feed-forward simulation at much lower computational cost. We develop metrics to study various aspects of the autoencoder-generated maps and show that the reconstruction is reliable. Although we observe a mild loss of resolution in the generated maps, this effect is smaller than the smoothing produced by convolving the original map with a source of plausible accretion-disk size at the red end of the optical spectrum and at longer wavelengths, and in particular with a source size suitable for studying the broad-line region of quasars. Used to generate large samples of magnification maps on demand, our model can enable fast modeling of microlensing variability in lensed quasars and supernovae.
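To illustrate the kind of compression described above, the following is a minimal sketch of a convolutional autoencoder that encodes magnification maps into a low-dimensional latent code and decodes maps back from it. The framework (PyTorch), the 256×256 single-channel input size, the 64-dimensional latent space, the layer structure, and the MSE reconstruction loss are all illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of a convolutional autoencoder for magnification maps.
# Assumptions (not specified in the abstract): PyTorch, single-channel
# 256x256 input maps, a 64-dimensional latent space, and an MSE loss.
import torch
import torch.nn as nn

class MapAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        # Encoder: compress a magnification map to a low-dimensional code.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1),   # 256 -> 128
            nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1),  # 128 -> 64
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 32 * 32, latent_dim),
        )
        # Decoder: generate a map from a latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 32 * 32),
            nn.Unflatten(1, (64, 32, 32)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 32 -> 64
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 64 -> 128
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # 128 -> 256
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Usage: reconstruct a batch of (log-)magnification maps.
model = MapAutoencoder()
maps = torch.randn(8, 1, 256, 256)          # placeholder for real map data
recon = model(maps)
loss = nn.functional.mse_loss(recon, maps)  # reconstruction fidelity
```

Once trained, only the decoder is needed to generate maps on demand from latent codes, which is what makes this approach cheaper than re-running inverse ray-shooting for each (κ, γ, s) combination.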