Abstract

Context. Current models of galaxy evolution are constrained by the analysis of catalogs containing the flux and size of galaxies extracted from multiband deep fields. However, these catalogs contain inevitable observational and extraction-related biases that can be highly correlated. In practice, taking all of these effects into account simultaneously is difficult, and the derived models are therefore inevitably biased as well.

Aims. To address this issue, we use robust likelihood-free methods to infer luminosity function parameters, which is made possible by the massive compression of multiband images using artificial neural networks. This technique makes the use of catalogs unnecessary when observed and simulated multiband deep fields are compared and model parameters are constrained. Because of the efficient data compression, the method is not affected by the binning of observables that is inherent to the use of catalogs.

Methods. A forward-modeling approach generates galaxies of multiple types, depending on luminosity function parameters, rendered on photometric multiband deep fields that include instrumental and observational characteristics. The simulated and the observed images present the same selection effects and can therefore be properly compared. We trained a fully convolutional neural network to extract the summary statistics most sensitive to the model parameters out of these realistic simulations, shrinking the dimensionality of the summary space to the number of parameters in the model. Finally, using the trained network to compress both observed and simulated deep fields, the model parameter values were constrained through population Monte Carlo likelihood-free inference.

Results. Using synthetic photometric multiband deep fields similar to the previously reported CFHTLS and WIRDS D1/D2 deep fields, and massively compressing them through the convolutional neural network, we demonstrate the robustness, accuracy, and consistency of this new catalog-free inference method. We are able to constrain the parameters of the luminosity functions of different types of galaxies, and our results are fully compatible with classic catalog-extraction approaches.
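
As an illustration of the inference step described in the abstract, the following is a minimal, self-contained sketch of a population Monte Carlo (PMC) approximate Bayesian computation loop in Python. The simulator, the compressor, the tolerance schedule, and all names are illustrative stand-ins rather than the paper's implementation: a real pipeline would replace simulate_field with the forward model and compress with the trained convolutional network.

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)

    def simulate_field(theta):
        # Stand-in forward model: emits mock "summaries" of a simulated field.
        return theta + 0.1 * rng.standard_normal(theta.shape)

    def compress(field):
        # Stand-in for the trained compressor network (identity here).
        return field

    # Compressed summaries of the "observed" field, via the same compressor.
    target = compress(simulate_field(np.array([0.5, -1.3])))

    n_part, n_dim = 200, 2
    particles = rng.uniform(-2.0, 2.0, size=(n_part, n_dim))  # flat-prior draws
    weights = np.full(n_part, 1.0 / n_part)

    for _ in range(3):
        # Distance between each compressed simulation and the observation.
        dist = np.array([np.linalg.norm(compress(simulate_field(p)) - target)
                         for p in particles])
        eps = np.quantile(dist, 0.5)        # shrink the tolerance each iteration
        cov = 2.0 * np.cov(particles.T, aweights=weights)

        new = np.empty_like(particles)
        for i in range(n_part):
            while True:                     # rejection step at tolerance eps
                j = rng.choice(n_part, p=weights)
                prop = rng.multivariate_normal(particles[j], cov)
                d = np.linalg.norm(compress(simulate_field(prop)) - target)
                if d <= eps:
                    new[i] = prop
                    break

        # Importance weights: flat prior over the Gaussian-mixture proposal
        # (proposals that drift outside the prior box are kept for brevity).
        mix = np.array([np.sum(weights * multivariate_normal.pdf(
            particles, mean=p, cov=cov)) for p in new])
        weights = (1.0 / mix) / np.sum(1.0 / mix)
        particles = new

    print("posterior mean:", np.average(particles, axis=0, weights=weights))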

Highlights

  • The study of galaxy evolution is based on the analysis of large sets of photometric surveys with long exposure times and a wide range of bands

  • We use the information maximizing neural network algorithm (IMNN; Charnock et al. 2018), which trains a neural network to be sensitive only to the effects of the model parameters in simulations obtained from our forward model (see the sketch after this list)

  • In the past two decades, progress in the field of pattern recognition and classification on images has been tremendous. It was shown in 2004 that standard neural networks can be greatly accelerated by using graphics processing units (GPUs), with a GPU implementation running 20 times faster than the same implementation on central processing units (CPUs; Oh & Jung 2004)
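
The IMNN objective referenced in the second highlight can be made concrete with a short sketch. The network is trained so that the Fisher information of its summaries is maximal; the Fisher matrix is estimated from simulations run at the fiducial parameter values and at seed-matched, slightly perturbed values. The Python sketch below only shows how that Fisher estimate is assembled from precomputed summaries; the array names, sizes, and toy inputs are illustrative, and the covariance regularization term used in the actual IMNN loss is omitted.

    import numpy as np

    def fisher_from_summaries(s_fid, s_minus, s_plus, delta_theta):
        # s_fid   : (n_sims, n_params) summaries at the fiducial parameters
        # s_minus : (n_sims, n_params, n_params) summaries at theta_a - delta_a
        # s_plus  : (n_sims, n_params, n_params) summaries at theta_a + delta_a
        # (perturbed runs are seed-matched to the fiducial ones)
        C = np.cov(s_fid.T)                # summary covariance at the fiducial
        Cinv = np.linalg.inv(C)
        # Central finite difference of the mean summary w.r.t. each parameter.
        dmu = (s_plus.mean(axis=0) - s_minus.mean(axis=0)) \
              / (2.0 * delta_theta[:, None])
        # F_ab = dmu_a^T C^{-1} dmu_b ; training maximizes ln|F|.
        return dmu @ Cinv @ dmu.T

    # Toy inputs that only exercise the shapes: W plays the role of the
    # (unknown) response of each summary to each parameter.
    rng = np.random.default_rng(1)
    n_sims, n_params = 1000, 2
    delta = np.array([0.05, 0.05])
    W = rng.standard_normal((n_params, n_params))
    s_fid = rng.standard_normal((n_sims, n_params))
    s_plus = s_fid[:, None, :] + delta[None, :, None] * W[None, :, :]
    s_minus = s_fid[:, None, :] - delta[None, :, None] * W[None, :, :]
    F = fisher_from_summaries(s_fid, s_minus, s_plus, delta)
    print("ln|F| =", np.linalg.slogdet(F)[1])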


Introduction

The study of galaxy evolution is based on the analysis of large sets of photometric surveys with long exposure times and a wide range of bands. In a recent paper, Carassou et al. (2017) developed a method based on binning extracted catalogs in fluxes and sizes in order to infer the parameters of the luminosity functions used in their model. We use the information maximizing neural network algorithm (IMNN; Charnock et al. 2018), which trains a neural network to be sensitive only to the effects of the model parameters in the simulations obtained from our forward model. We implement this method for the first time on deep and large multiwavelength images of galaxies: the 1 deg² Canada–France–Hawaii Telescope Legacy Survey (CFHTLS) D1 deep field observed in the optical, using the MegaPrime instrument in the u, g, r, i, z filters, and in the near-infrared (IR) using the WIRCam instrument in the J, H, Ks filters.
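
To fix ideas on the compression itself, here is a minimal fully convolutional network in Python (PyTorch) that maps an 8-band cutout (u, g, r, i, z, J, H, Ks) to as many summary statistics as there are model parameters. The paper's outline mentions an Inception-style architecture; this sketch is a deliberately simplified stand-in with illustrative layer sizes, meant only to show why a fully convolutional design with global pooling can ingest fields of arbitrary size.

    import torch
    import torch.nn as nn

    class FieldCompressor(nn.Module):
        # Fully convolutional compressor: an 8-band image in, one summary
        # statistic per model parameter out. Layer sizes are illustrative.
        def __init__(self, n_bands=8, n_params=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(n_bands, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            # A 1x1 convolution maps the feature maps to n_params channels;
            # global average pooling then removes the spatial dimensions,
            # so fields of any size can be compressed (fully convolutional).
            self.head = nn.Conv2d(64, n_params, 1)

        def forward(self, x):
            return self.head(self.features(x)).mean(dim=(-2, -1))

    # A batch of four mock 8-band cutouts, 128x128 pixels each.
    fields = torch.randn(4, 8, 128, 128)
    print(FieldCompressor()(fields).shape)  # torch.Size([4, 2])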

Basis of the model
Luminosity functions per galaxy type
Bulge component
Internal extinction
Image generation
Data conditioning
Milky Way reddening
Compression through neural networks
Fisher information
Gaussian likelihood function
Inception network
Loss function
Training of the network
Choice of fiducial values
Description
Training the network
ABC posteriors
PMC posteriors
Observed and virtual data
PMC posteriors and confidence intervals
Joint posterior and confidence intervals
Comparison with other studies
Conclusions and perspectives
