Abstract

Precision cosmology has recently renewed attention to approximate methods for the clustering of matter on large scales, whose foundations date back to the period from the late 1960s to the early 1990s. Indeed, although the prospect of reaching sub-percent accuracy in the measurement of clustering poses a challenge even to full N-body simulations, an accurate estimation of the covariance matrix of clustering statistics, not to mention the sampling of parameter space, requires the use of a large number (hundreds in the most favourable cases) of simulated (mock) galaxy catalogs. Combining a few N-body simulations with a large number of realizations produced with approximate methods is the most promising approach to solving these problems with a reasonable amount of resources. In this paper I review this topic, starting from the foundations of the methods, then going through the pioneering efforts of the 1990s, and finally presenting the latest extensions and a few codes that are now being used in present-generation surveys and are being thoroughly tested to assess their performance in the context of future surveys.
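
To make the counting argument concrete, the sketch below (not taken from the paper; the 500 mocks, 30 bins, and function names are illustrative assumptions) estimates the sample covariance of a clustering statistic from a set of mock measurements and applies the Hartlap et al. (2007) factor, which corrects the bias incurred when a noisy covariance estimate is inverted:

```python
import numpy as np

def covariance_from_mocks(stats):
    """Unbiased sample covariance of a clustering statistic.

    stats : array of shape (n_mocks, n_bins); one measured power spectrum
            (or correlation function) per mock catalog.
    """
    n_mocks = stats.shape[0]
    diff = stats - stats.mean(axis=0)          # deviations from the mock mean
    return diff.T @ diff / (n_mocks - 1)       # C_ij, unbiased estimator

def debiased_precision(cov, n_mocks):
    """Inverse covariance with the Hartlap et al. (2007) correction,
    needed because the inverse of a noisy covariance estimate is biased."""
    n_bins = cov.shape[0]
    return (n_mocks - n_bins - 2) / (n_mocks - 1) * np.linalg.inv(cov)

# Illustrative numbers only: 500 mocks, a 30-bin data vector.
rng = np.random.default_rng(0)
mock_spectra = rng.normal(size=(500, 30))      # stand-in for 500 mock P(k) measurements
cov = covariance_from_mocks(mock_spectra)
icov = debiased_precision(cov, n_mocks=500)
```

The correction factor (n_mocks − n_bins − 2)/(n_mocks − 1) drops toward zero as the data vector grows to the size of the mock set, which is why hundreds of mock catalogs are needed even for data vectors of modest length.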

Highlights

  • The ability to understand the fine details of clustering pays off when the “blind” solution of N-body simulations must be demonstrated to be accurate to a certain level, or must be replicated a very large number of times to quantify the systematics of a galaxy survey

Introduction

The formation of structure in the cosmological ΛCDM model (Cold Dark Matter with a cosmological constant Λ) proceeds through gravitational evolution and collapse of small fluctuations, imprinted at very early times during an inflationary period (e.g., [1,2]). Approximate methods are used by many astronomers who develop semi-analytic models (SAMs) [31,32,33] for the formation and evolution of galaxies, taking advantage of the flexibility of such methods, which in many cases makes the production of model predictions much smoother. Another interesting application of approximate methods is the production of constrained realizations of initial conditions that reproduce, for instance, the local Universe (see Section 4.1); the Bayesian techniques employed to sample the large parameter space require many forecasts of what the large-scale structure is expected to be for a given initial configuration, and this would lead to painfully long computing times if a proper N-body code were used.
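
The Lagrangian methods that recur throughout this review (see "Extensions of ZA" and "Truncated Zeldovich Approximation and Beyond" below) build on the Zel'dovich approximation, in which particles are displaced from their Lagrangian positions q to x = q + D(t) ψ(q), with the displacement field ψ computed from the linear overdensity. As a minimal illustration, here is a NumPy sketch assuming a periodic box and a precomputed linear density grid; the grid size, box size, and growth factor are placeholder values, not taken from the paper:

```python
import numpy as np

def zeldovich_displacement(delta, box_size):
    """Zel'dovich displacement field psi from a linear overdensity grid.

    Solves nabla^2 phi = delta in Fourier space and returns psi = -grad phi,
    so that particle positions follow x = q + D(t) * psi(q).
    delta : (n, n, n) real array, linear density contrast (normalized to D = 1).
    """
    n = delta.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)   # angular wavenumbers
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid division by zero at k = 0
    delta_k = np.fft.fftn(delta)
    delta_k[0, 0, 0] = 0.0                 # remove the mean mode
    # psi_k = i k delta_k / k^2, one inverse FFT per component
    psi = [np.fft.ifftn(1j * ki * delta_k / k2).real for ki in (kx, ky, kz)]
    return np.stack(psi)                   # shape (3, n, n, n)

# Toy usage: a Gaussian random grid as a stand-in for proper initial conditions.
rng = np.random.default_rng(1)
delta = rng.normal(size=(32, 32, 32))
psi = zeldovich_displacement(delta, box_size=100.0)   # box in Mpc/h, illustrative
D = 0.5                                               # growth factor, illustrative
# Eulerian positions: x = q + D * psi, with q the grid points.
```

Since ψ_k = i k δ_k / k², the whole displacement field costs a handful of FFTs; it is this cheapness, relative to integrating an N-body system, that Lagrangian approximate methods exploit.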

Foundations of Approximate Methods
Perturbation Theories
The Need for Smoothing
Press and Schechter and Its Extensions
Ellipsoidal Collapse
Halo Bias
Approximate Methods in the 1990s
Lognormal Model
Adhesion Theory
Extensions of ZA
Truncated Zeldovich Approximation and Beyond
Reconstruction of Initial Conditions
The Age of Precision Cosmology
Recent Development of the Foundations
The Universal Mass Function
Lagrangian Methods to Produce Mock Catalogs
PINOCCHIO
PTHALOS
Methods Based on the Particle-Mesh Scheme
PATCHY
EZmocks
HALOGEN
Comparison of Methods
Method
The nIFTy Comparison Project
Findings
Concluding Remarks