Abstract

The Global Navigation Satellite System (GNSS) is presently a powerful tool for sensing the Earth's ionosphere. For this purpose, ionospheric measurements (IMs), which are by definition slant total electron content biased by satellite and receiver differential code biases (DCBs), must first be extracted from GNSS data and then used as inputs for further ionospheric representations such as tomography. Using the customary phase-to-code leveling procedure, this research comparatively evaluates the calibration errors in experimental IMs obtained from three GNSSs, namely the US Global Positioning System (GPS), the Chinese BeiDou Navigation Satellite System (BDS), and the European Galileo. On the basis of ten days of dual-frequency, triple-GNSS observations collected from eight co-located ground receivers that independently form short baselines and zero baselines, IMs are determined for all tracked satellites at each receiver and then differenced between the receivers of each baseline, satellite by satellite, to evaluate their calibration errors. The short-baseline analysis shows that the effects of calibration errors on IMs range, in total electron content units, from 1.58 to 2.16 for GPS, 0.70 to 1.87 for Galileo, and 1.13 to 1.56 for BDS; it further shows that the code multipath effect accounts for the largest part of this error budget. Sidereal periodicity is found in single-differenced (SD) IMs for GPS and BDS geostationary satellites, and the correlation of SD IMs over two consecutive days reaches its maximum at a time lag of around 4 min. Moreover, as a byproduct of the zero-baseline analysis, daily between-receiver DCBs for GPS are found to be subject to more significant intra-day variations than those for BDS and Galileo.
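To make the calibration setting concrete, the customary phase-to-code leveling step referenced above can be sketched as follows; the notation (code observables P_1, P_2 and phase observables L_1, L_2 at frequencies f_1, f_2, and the arc-average operator) is introduced here for illustration and is not reproduced from the paper. The geometry-free combinations are

\[
P_{\mathrm{GF}} = P_2 - P_1 = \alpha\,\mathrm{STEC} + c\,(\mathrm{DCB}_r + \mathrm{DCB}^s) + \varepsilon_P,
\]
\[
L_{\mathrm{GF}} = L_1 - L_2 = \alpha\,\mathrm{STEC} + B_{\mathrm{arc}} + \varepsilon_L,
\qquad
\alpha = 40.3\,\bigl(f_2^{-2} - f_1^{-2}\bigr),
\]

where the geometry, clocks, and troposphere cancel, and B_arc is the arc-dependent phase ambiguity term. Leveling removes B_arc by averaging the code-minus-phase difference over each continuous tracking arc:

\[
\tilde{L}_{\mathrm{GF}} = L_{\mathrm{GF}} + \bigl\langle P_{\mathrm{GF}} - L_{\mathrm{GF}} \bigr\rangle_{\mathrm{arc}},
\qquad
\mathrm{IM} = \tilde{L}_{\mathrm{GF}}/\alpha = \mathrm{STEC} + c\,(\mathrm{DCB}_r + \mathrm{DCB}^s)/\alpha .
\]

The resulting IM is thus the slant TEC biased by the receiver and satellite DCBs, as stated above; code noise and multipath that do not average out over the arc survive as leveling (calibration) errors, which is precisely the quantity that the between-receiver differencing on short and zero baselines isolates.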
