Before improving the control of a process, we must ensure the coherence of the information obtained from instrument lines or sensors. This information can be corrupted by errors and can also deviate from the optimal operating range, and the operator must take precautions to avoid leaving this range. Error detection is used to point out such deviations. Detection, location, characterization of the different errors and estimation of the true values are the steps of the data reconciliation problem. Process measurements are subject to two types of errors: (1) random errors, generally taken to be independent and Gaussian with zero mean, and (2) gross errors, which are caused by non-random events such as malfunctioning sensors, instrument biases and inexact process models. Various methods for the detection and location of gross errors in process data have been proposed in recent years, including the parity space approach, the standardized least-squares residuals approach and the standardized imbalance residuals approach. In this paper these methods are applied to instrument and analytical redundancies; the performance of the parity space approach is compared with that of the two approaches mentioned above by varying different parameters. We restrict our study to the presence of one, two and three gross errors, and we consider that all streams are measured. We also demonstrate the equivalence between the parity space approach and the normalized residuals approach. Once the gross errors have been located and the measurements containing them have been deleted, we proceed with data reconciliation.
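To make the residual-based tests concrete, the following is a minimal sketch of linear data reconciliation with standardized imbalance and least-squares residuals, under assumptions not taken from the paper: a hypothetical two-node flow network, assumed measurement standard deviations, illustrative measured values, and a Gaussian 95% threshold. It is not the paper's case study, only an illustration of the general technique.

```python
import numpy as np

# Hypothetical flow network: node balances A @ x = 0 for the true flows x.
# Rows = nodes, columns = streams (all streams measured, as assumed in the text).
A = np.array([
    [1.0, -1.0, -1.0,  0.0],   # node 1: stream 1 splits into streams 2 and 3
    [0.0,  1.0,  1.0, -1.0],   # node 2: streams 2 and 3 merge into stream 4
])

sigma = np.array([0.5, 0.3, 0.3, 0.6])   # assumed measurement standard deviations
V = np.diag(sigma**2)                    # measurement covariance (random errors)

y = np.array([10.2, 6.1, 3.9, 12.5])     # illustrative measurements
# (stream 4 is given a deliberate bias to mimic a gross error)

# Constraint imbalances and their covariance
r = A @ y                                # imbalance residuals
W = A @ V @ A.T                          # covariance of r

# Standardized imbalance residuals: compared with a standard normal law
z_imbalance = r / np.sqrt(np.diag(W))

# Least-squares reconciliation: minimize (y - x)' V^-1 (y - x) subject to A x = 0
K = V @ A.T @ np.linalg.inv(W)           # gain matrix
x_hat = y - K @ r                        # reconciled estimates
a = y - x_hat                            # measurement adjustments
Va = V @ A.T @ np.linalg.inv(W) @ A @ V  # covariance of the adjustments

# Standardized least-squares residuals, one per measurement
z_measurement = a / np.sqrt(np.diag(Va))

threshold = 1.96                         # approximate 95% two-sided Gaussian test
suspect = np.where(np.abs(z_measurement) > threshold)[0]
print("normalized imbalances:  ", np.round(z_imbalance, 2))
print("normalized adjustments: ", np.round(z_measurement, 2))
print("suspected gross errors in streams:", suspect + 1)
```

In this sketch, measurements flagged by the standardized residual test would be removed (or their variances inflated) and the reconciliation repeated on the remaining data, which is the sequence of steps the paper follows.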