Abstract

Context. Determining the radial positions of galaxies to high accuracy depends on the correct identification of salient features in their spectra. Classical techniques for spectroscopic redshift estimation rely on template matching with cross-correlation, where the templates are usually constructed from empirical spectra or from simulations based on the modeling of local galaxies.

Aims. We propose two new spectroscopic redshift estimation schemes based on new learning techniques for galaxy spectra representation, using either a dictionary learning technique for sparse representation or denoising autoencoders. We investigate how these representations impact redshift estimation.

Methods. We first explored dictionary learning to obtain a sparse representation of rest-frame galaxy spectra, modeling both the continuum and the emission lines. As an alternative, denoising autoencoders were considered to learn non-linear representations from rest-frame emission lines extracted from the data. In both cases, the redshift was then determined by redshifting the learnt representation and selecting the redshift that gave the lowest approximation error among the tested values.

Results. These methods were tested on realistic simulated galaxy spectra, with photometry modeled after the Large Synoptic Survey Telescope (LSST) and spectroscopy reproducing the properties of the Sloan Digital Sky Survey (SDSS). They were compared to Darth Fader, a robust technique that extracts line features and estimates the redshift through cross-correlation with eigentemplates. We show that both dictionary learning and denoising autoencoders provide improved accuracy and reliability across all signal-to-noise (S/N) regimes and galaxy types. Furthermore, the former is more robust at high noise levels, while the latter is more accurate in high S/N regimes; combining both estimators improves results at low S/N.

Conclusions. The representation-learning framework for spectroscopic redshift analysis introduced in this work offers high performance in feature extraction and redshift estimation, improving on a classical eigentemplate approach. This is a necessity for next-generation galaxy surveys, and we demonstrate a successful application on realistic simulated survey data.
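Both estimators share the selection step described above: the learnt rest-frame representation is redshifted over a grid of trial values, and the redshift yielding the lowest approximation error is retained. A minimal sketch of this step, assuming a log-wavelength grid on which a redshift acts as a rigid pixel shift; the grid spacing, template set, and function names are illustrative, not the paper's implementation:

```python
import numpy as np

def estimate_redshift(spectrum, templates, z_grid, dlog_lambda):
    """Pick the trial redshift whose shifted templates best approximate
    the observed spectrum in the least-squares sense (illustrative sketch)."""
    best_z, best_err = None, np.inf
    for z in z_grid:
        # On a log10-wavelength grid, a redshift is a rigid shift of
        # log10(1 + z) / dlog_lambda pixels.
        shift = int(round(np.log10(1.0 + z) / dlog_lambda))
        shifted = np.roll(templates, shift, axis=1)
        # Least-squares coefficients of the spectrum in the shifted templates.
        coeffs, *_ = np.linalg.lstsq(shifted.T, spectrum, rcond=None)
        err = np.linalg.norm(spectrum - shifted.T @ coeffs)
        if err < best_err:
            best_z, best_err = z, err
    return best_z
```

In practice the templates would be the learnt dictionary atoms (or decoded autoencoder representations), and the approximation error would be computed only over the overlapping wavelength range rather than with a circular shift.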

Highlights

  • Galaxy redshift surveys are among the main observational tools to probe cosmological models

  • Redshift estimation with dictionary learning: the first learning technique we propose for redshift estimation relies on learning a representation of the full galaxy spectrum with a dictionary learning approach, assuming that spectra can be sparsely decomposed in such a dictionary

  • In this paper, we introduced two new methods of spectroscopic redshift estimation, and benchmarked them on simulated data against a reference method based on line feature estimation and cross-correlation with eigentemplates



Introduction

Galaxy redshift surveys are among the main observational tools used to probe cosmological models. Contrary to methods relying on PCA (e.g., Glazebrook et al. 1998; Machado et al. 2013), where template information is compressed into several orthogonal eigentemplates learnt from data or simulations, the techniques proposed here learn correlated templates, assuming that the observed spectra can be sparsely represented in a dictionary obtained from the data. Such techniques are suited to learning features (such as combinations of emission lines) or different structures in the data (e.g., lines and breaks) that are not necessarily common to all data but representative of a subset of them, and that are potentially correlated, whereas PCA extracts orthogonal features common to all data. The global dictionary is learnt with a targeted sparsity degree τ, given by the sum of the targeted sparsity degrees selected to derive the two sub-dictionaries.
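As an illustration of this kind of dictionary learning, the following sketch alternates a deliberately crude sparse-coding step (hard-thresholding of correlations, standing in for a proper pursuit algorithm such as OMP) with the Method of Optimal Directions (MOD) dictionary update. All sizes and the sparsity degree τ are illustrative; this is not the paper's algorithm:

```python
import numpy as np

def hard_threshold_code(D, X, tau):
    """Sparse-code each column of X over dictionary D, keeping only the
    tau largest-magnitude coefficients (a crude stand-in for OMP)."""
    A = D.T @ X  # correlations (assumes unit-norm atoms)
    smallest = np.argsort(np.abs(A), axis=0)[:-tau, :]
    np.put_along_axis(A, smallest, 0.0, axis=0)
    return A

def mod_update(X, A):
    """Method of Optimal Directions: D = X A^+ minimizes ||X - D A||_F."""
    D = X @ np.linalg.pinv(A)
    # Renormalize atoms to unit l2 norm (guard against empty atoms).
    D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    return D

def learn_dictionary(X, n_atoms, tau, n_iter=30, seed=0):
    """Alternate sparse coding and MOD updates from a random start."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    for _ in range(n_iter):
        A = hard_threshold_code(D, X, tau)
        D = mod_update(X, A)
    A = hard_threshold_code(D, X, tau)  # final codes for the final D
    return D, A
```

The two sub-dictionaries mentioned above (e.g., continuum and line components) would each be learnt this way with their own sparsity target, the global τ being their sum.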

Denoising autoencoders for template representation
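By way of illustration, a denoising autoencoder of the kind described in the abstract can be reduced to a single hidden layer with tied weights, trained to reconstruct clean inputs from noise-corrupted ones. This is a simplified sketch, not the architecture used in the paper; all sizes and hyperparameters are illustrative:

```python
import numpy as np

class TinyDenoisingAutoencoder:
    """Single-hidden-layer denoising autoencoder with tied weights:
    encode with tanh(x W + bh), decode linearly with W^T."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden)) * 0.1
        self.bh = np.zeros(n_hidden)
        self.bo = np.zeros(n_in)

    def encode(self, x):
        return np.tanh(x @ self.W + self.bh)

    def decode(self, h):
        return h @ self.W.T + self.bo

    def train(self, X, noise_std=0.1, lr=0.05, n_epochs=200, seed=0):
        """Gradient descent on the squared error between the reconstruction
        of the corrupted input and the clean input."""
        rng = np.random.default_rng(seed)
        for _ in range(n_epochs):
            X_noisy = X + noise_std * rng.standard_normal(X.shape)
            H = self.encode(X_noisy)
            err = self.decode(H) - X            # compare to CLEAN input
            dZ = (err @ self.W) * (1.0 - H**2)  # backprop through tanh
            gW = X_noisy.T @ dZ + err.T @ H     # tied-weight gradient
            self.W -= lr * gW / len(X)
            self.bh -= lr * dZ.mean(axis=0)
            self.bo -= lr * err.mean(axis=0)
        return self
```

The learnt hidden representation plays the role of the non-linear feature space in which rest-frame emission-line spectra are encoded before the redshift scan.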
Comparison of results
Conclusion
