Abstract

Optical spectra contain a wealth of information about the physical properties and formation histories of galaxies. Often, though, spectra are too noisy for this information to be accurately retrieved. In this study, we explore how machine learning methods can be used to de-noise spectra and increase the amount of information we can gain without having to turn to sample averaging methods such as spectral stacking. Using machine learning methods trained on noise-added spectra – Sloan Digital Sky Survey (SDSS) spectra with Gaussian noise added – we investigate methods of maximizing the information we can gain from these spectra, in particular from emission lines, such that more detailed analysis can be performed. We produce a variational autoencoder (VAE) model and apply it to a sample of noise-added spectra. Compared to the flux measured in the original SDSS spectra, the model values are accurate within 0.3–0.5 dex, depending on the specific spectral line and signal-to-noise ratio. Overall, the VAE performs better than a principal component analysis method, in terms of reconstruction loss and accuracy of the recovered line fluxes. To demonstrate the applicability and usefulness of the method in the context of large optical spectroscopy surveys, we simulate a population of spectra with noise similar to that in galaxies at z = 0.1 observed by the Dark Energy Spectroscopic Instrument (DESI). We show that we can recover the shape and scatter of the mass–metallicity relation in this ‘DESI-like’ sample, in a way that is not possible without the VAE-assisted de-noising.
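To illustrate the general approach described in the abstract, the sketch below shows a minimal de-noising VAE trained on (noise-added, original) spectrum pairs. It is not the paper's architecture: the network sizes, latent dimension, pixel count, and training loop are illustrative assumptions, and PyTorch is used purely for convenience.

```python
# Minimal sketch of a de-noising variational autoencoder for optical spectra,
# assuming fixed-length flux vectors. All sizes and hyperparameters are
# illustrative, not those used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectrumVAE(nn.Module):
    def __init__(self, n_pix=4000, latent_dim=10):
        super().__init__()
        # Encoder: noisy spectrum -> mean and log-variance of the latent Gaussian
        self.enc = nn.Sequential(nn.Linear(n_pix, 512), nn.ReLU(),
                                 nn.Linear(512, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        # Decoder: latent vector -> de-noised spectrum
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 512), nn.ReLU(),
                                 nn.Linear(512, n_pix))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def vae_loss(recon, clean, mu, logvar, beta=1.0):
    # Reconstruction term compares the output to the original (low-noise) spectrum,
    # so the network learns to de-noise; the KL term regularizes the latent space.
    recon_term = F.mse_loss(recon, clean, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + beta * kl

# Toy training step on placeholder data standing in for (noise-added, original) SDSS pairs.
model = SpectrumVAE(n_pix=4000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(32, 4000)                   # placeholder "clean" spectra
noisy = clean + 0.1 * torch.randn_like(clean)  # Gaussian noise added, as in the training set described above
recon, mu, logvar = model(noisy)
loss = vae_loss(recon, clean, mu, logvar)
loss.backward()
opt.step()
```

Once trained, the model's reconstruction of a noisy input spectrum serves as the de-noised spectrum from which emission-line fluxes can be measured.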
