Abstract

The bias in the determination of the Hubble parameter and the Hubble constant in the modern Universe is discussed. It can arise from the statistical processing of galaxy redshifts together with distances estimated from statistical relations of limited accuracy. This produces a number of effects leading to either underestimation or overestimation of the Hubble parameter under any method of statistical processing, primarily the least squares method (LSM). The Hubble constant is underestimated when the whole sample is processed and overestimated when the sample is constrained by distance, especially from above; data selection overestimates it still further. The bias significantly exceeds the error of the Hubble constant calculated by the LSM formulae. These effects are demonstrated both analytically and with Monte Carlo simulations, in which deviations are added to the velocities and estimated distances of an original dataset obeying the Hubble law. The characteristics of the deviations are similar to those of real observations, with errors in the estimated distances of up to 20%. As a result, processing the same mock sample by LSM can yield estimates of the Hubble constant ranging from 96% of the true value for the entire sample to 110% for a subsample with distances limited from above. These effects can bias the Hubble constant obtained from real data and lead to an overestimation of the accuracy of its determination. This may call into question the accuracy of determining the Hubble constant and can significantly reduce the tension between the values obtained from observations of the early and modern Universe, which was actively discussed during the last year.
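The Monte Carlo procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual code: the true Hubble constant, the distance range, the peculiar-velocity scatter, and the distance cut are all assumed values chosen for demonstration. It shows the two qualitative effects named above: multiplicative distance errors dilute the LSM slope of the full sample below the true value, while cutting the sample from above in *estimated* distance pushes the slope above it.

```python
import numpy as np

rng = np.random.default_rng(42)

H0_TRUE = 70.0    # km/s/Mpc; assumed true value for the mock sample
N = 100_000       # number of mock galaxies (illustrative)
SIGMA_D = 0.20    # relative distance error, up to ~20% as in the text
SIGMA_V = 300.0   # km/s peculiar-velocity scatter (assumed)

# True distances and Hubble-law velocities with velocity noise
d_true = rng.uniform(10.0, 200.0, N)                  # Mpc
v = H0_TRUE * d_true + rng.normal(0.0, SIGMA_V, N)

# Estimated distances carry a multiplicative error
d_est = d_true * (1.0 + rng.normal(0.0, SIGMA_D, N))

def lsm_hubble(v, d):
    """Least-squares slope of v = H * d through the origin."""
    return np.sum(v * d) / np.sum(d * d)

# Whole sample: errors in the regressor dilute the slope,
# roughly H ~ H0 / (1 + SIGMA_D**2), i.e. an underestimate
h_full = lsm_hubble(v, d_est)

# Subsample limited from above in *estimated* distance: objects
# scattered down to small d_est keep their large velocities,
# biasing the slope high
mask = d_est < 100.0
h_cut = lsm_hubble(v[mask], d_est[mask])

print(f"full sample: H = {h_full:.1f} ({100 * h_full / H0_TRUE:.0f}% of true)")
print(f"d_est cut:   H = {h_cut:.1f} ({100 * h_cut / H0_TRUE:.0f}% of true)")
```

With these assumed parameters the full-sample fit comes out below the true value and the distance-limited fit above it, reproducing the direction (though not necessarily the exact 96%/110% figures) of the biases quoted in the abstract.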
