Abstract

Gamma-ray bursts (GRBs) observed up to redshifts $z>9.4$ can be used as probes to test cosmological models. Here we show how changes in the slope of the {\it luminosity $L^*_X$--break time $T^*_a$} correlation in GRB afterglows, hereafter the LT correlation, affect the determination of the cosmological parameters. With a simulated data set of 101 GRBs whose central correlation slope differs from the intrinsic one by $5\sigma$, we find an overestimated value of the matter density parameter, $\Omega_M$, compared to the value obtained with SNe Ia, while the best-fit value of the Hubble constant, $H_0$, remains compatible within 1$\sigma$ with other probes. We show that this compatibility of $H_0$ is due to the large intrinsic scatter of the simulated sample. If we instead consider a subsample of highly luminous GRBs ($HighL$), we find that neither $H_0$ nor $\Omega_M$ is compatible within 1$\sigma$, and $\Omega_M$ is underestimated by $13\%$. However, the choice of the $HighL$ sample dramatically reduces the intrinsic scatter of the correlation, thus possibly identifying this sample as the standard canonical `GRBs', confirming previous results presented in Dainotti et al. (2010, 2011). Here we consider the LT correlation as an example, but this reasoning can be extended to all other GRB correlations. In the literature so far, GRB correlations have not been corrected for redshift evolution and selection biases; therefore, we do not know their intrinsic slopes and, consequently, how far the use of the observed correlations can influence the derived `best' cosmological parameters. We conclude that any approach involving cosmology should rely only on intrinsic correlations, not on the observed ones.
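To make the cosmology dependence explicit: in the usual convention the LT correlation reads $\log_{10} L^*_X = \log_{10} a + b\,\log_{10} T^*_a$, where $T^*_a = T_a/(1+z)$ is the rest-frame break time and the luminosity is inferred from the measured X-ray flux through the luminosity distance, $L_X = 4\pi D_L^2(z; H_0, \Omega_M)\,F_X$ (up to a K-correction), which is where the cosmological parameters enter. The short Python sketch below is not the pipeline used in this work; it only illustrates, under these assumptions, how a fixed correlation slope and normalization could be turned into a chi-square fit for $H_0$ and $\Omega_M$. All function names, the toy input values, and the adopted normalization and intrinsic scatter are illustrative placeholders.

# Minimal sketch (not the authors' code): fitting (H0, Omega_M) from a GRB
# sample once the LT-correlation slope and normalization are held fixed.
# The numerical inputs below are illustrative placeholders, not real data.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

c = 299792.458  # speed of light in km/s

def lum_distance(z, H0, Om):
    """Luminosity distance in Mpc for a flat LambdaCDM cosmology."""
    integrand = lambda zp: 1.0 / np.sqrt(Om * (1 + zp)**3 + (1 - Om))
    dc, _ = quad(integrand, 0.0, z)
    return (1 + z) * (c / H0) * dc

def chi2(params, z, logTa_rest, logFx, slope, norm, sigma_int):
    """Compare log L inferred from the flux via L = 4*pi*D_L^2*F with the
    LT prediction log L = norm + slope * log T*_a, for trial (H0, Om)."""
    H0, Om = params
    dl_cm = np.array([lum_distance(zi, H0, Om) for zi in z]) * 3.086e24  # Mpc -> cm
    logL_obs = np.log10(4 * np.pi * dl_cm**2) + logFx
    logL_pred = norm + slope * logTa_rest
    return np.sum((logL_obs - logL_pred)**2 / sigma_int**2)

# Toy call pattern with placeholder values:
z = np.array([0.5, 1.2, 3.0])
logTa_rest = np.array([3.5, 3.0, 2.6])    # log10 rest-frame break time (s)
logFx = np.array([-11.2, -11.8, -12.3])   # log10 X-ray flux (erg cm^-2 s^-1)
res = minimize(chi2, x0=[70.0, 0.3],
               args=(z, logTa_rest, logFx, -1.0, 48.0, 0.5),
               bounds=[(50, 90), (0.05, 1.0)])
print(res.x)  # best-fit (H0, Omega_M) under the assumed slope

The point the sketch makes is the one argued in the abstract: if the slope passed to the fit differs from the intrinsic one, the recovered $H_0$ and $\Omega_M$ inherit that bias.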
