Abstract

The use of Type~Ia SNe has thus far produced the most reliable measurement of the expansion history of the Universe, suggesting that $\Lambda$CDM offers the best explanation for the redshift--luminosity distribution observed in these events. But the analysis of other kinds of sources, such as cosmic chronometers, gamma-ray bursts, and high-$z$ quasars, conflicts with this conclusion, indicating instead that the constant expansion rate implied by the $R_{\rm h}=ct$ Universe is a better fit to the data. The central difficulty with the use of Type~Ia SNe as standard candles is that one must optimize three or four nuisance parameters characterizing supernova luminosities simultaneously with the parameters of an expansion model. Hence, in comparing competing models, one must reduce the data independently for each. We carry out such a comparison of $\Lambda$CDM and the $R_{\rm h}=ct$ Universe, using the Supernova Legacy Survey (SNLS) sample of 252 SN~events, and show that each model fits its individually reduced data very well. But since $R_{\rm h}=ct$ has only one free parameter (the Hubble constant), it follows from a standard model selection technique that it is to be preferred over $\Lambda$CDM, the minimalist version of which has three (the Hubble constant, the scaled matter density, and either the spatial curvature constant or the dark-energy equation-of-state parameter). We estimate by the Bayes Information Criterion (BIC) that in a pairwise comparison, the likelihood of $R_{\rm h}=ct$ is $\sim 90\%$, compared with only $\sim 10\%$ for a minimalist form of $\Lambda$CDM, in which dark energy is simply a cosmological constant. Compared to $R_{\rm h}=ct$, versions of the standard model with more elaborate parametrizations of dark energy are judged to be even less likely.
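To illustrate the model selection step described above, the following is a minimal sketch (not from the paper) of how BIC values translate into the relative likelihoods quoted in the abstract. The $\chi^2$ values used here are hypothetical placeholders, chosen only to show how a one-parameter model can be preferred over a three-parameter model even when the latter achieves a somewhat lower $\chi^2$; the actual weights ($\sim 90\%$ versus $\sim 10\%$) follow from the fits to the independently reduced SNLS data.

```python
import numpy as np

n = 252  # number of SNLS supernova events in the sample

def bic(chi2, k, n):
    """Bayes Information Criterion: BIC = chi^2 + k * ln(n),
    where k is the number of free parameters in the model."""
    return chi2 + k * np.log(n)

# Hypothetical chi^2 values (placeholders, not the paper's fitted values).
chi2_rhct = 248.0  # R_h = ct fit to its own reduced data
chi2_lcdm = 241.0  # minimalist LambdaCDM fit to its own reduced data

bic_rhct = bic(chi2_rhct, k=1, n=n)  # one free parameter: H0
bic_lcdm = bic(chi2_lcdm, k=3, n=n)  # three: H0, Omega_m, curvature or w

# Convert BIC differences into relative model likelihoods (Schwarz weights).
delta = np.array([bic_rhct, bic_lcdm])
delta -= delta.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()

print(f"P(R_h = ct)           ~ {weights[0]:.0%}")
print(f"P(minimalist LCDM)    ~ {weights[1]:.0%}")
```

With these placeholder $\chi^2$ values the BIC penalty of $2\ln(252)$ for the two extra parameters outweighs $\Lambda$CDM's better raw fit, yielding weights of roughly $90\%$ and $10\%$, which is the sense in which the simpler model is "preferred" by this criterion.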
