Abstract. For two decades, meteor radars have been routinely used to monitor atmospheric temperature around 90 km altitude. A common method, based on a temperature gradient model, uses the height dependence of the meteor decay time to obtain a height-averaged temperature in the peak meteor region. Traditionally this is done by fitting a linear regression model to the scatter plot of log10(1/τ) versus height, where τ is the half-amplitude decay time of the received signal. However, this method has been found to consistently bias the slope estimate. The consequence of such a bias is a systematic offset in the estimated temperature, which then requires calibration against other co-located measurements. The main reason for this biasing effect is thought to be the failure of the classical regression model to account for the measurement errors in τ and in the observed height. This is further complicated by the presence of various geophysical effects in the data, as well as by observational limitations of the measuring instruments. To incorporate the various error terms in the statistical model, an appropriate regression analysis for these data is the errors-in-variables model. An initial estimate of the slope parameter is obtained by assuming symmetric error variances in normalised height and log10(1/τ). This solution is found to be a good prior estimate for the core of the bivariate distribution. Further improvement is achieved by defining density contours of this bivariate distribution and restricting the data selection to higher contour levels. With this solution, meteor radar temperatures can be obtained independently, without any external calibration procedure. When compared with co-located lidar measurements, the systematic offset in the estimated temperature is shown to be reduced to 5 % or better on average.
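To make the two-step estimation described above concrete, the following is a minimal Python sketch, not the authors' implementation. It assumes an orthogonal (total least squares) fit as the errors-in-variables estimator under symmetric error variances on standardised variables, uses a Gaussian kernel density estimate as a stand-in for the bivariate density contours, and introduces a hypothetical `keep_fraction` parameter controlling how much of the dense core is retained; the conversion of the fitted slope to temperature via the gradient model is deliberately left out.

```python
import numpy as np
from scipy.stats import gaussian_kde


def tls_slope(x, y):
    """Orthogonal (total least squares) regression slope, appropriate
    when x and y carry comparable error variances."""
    X = np.column_stack([x - x.mean(), y - y.mean()])
    # The first principal axis of the centred cloud gives the TLS fit direction.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    direction = vt[0]
    return direction[1] / direction[0]


def eiv_slope_with_contour_selection(height, tau, keep_fraction=0.68):
    """Sketch of the two-step estimate outlined in the abstract:
    1) orthogonal regression on standardised height and log10(1/tau),
       assuming symmetric error variances (the initial/prior slope);
    2) refit using only points inside higher density contours of the
       bivariate distribution.
    `keep_fraction` (fraction of the densest points kept) is an assumed
    tuning parameter, not a value from the paper."""
    y = np.log10(1.0 / tau)

    # Standardise both variables so that equal error variances are a
    # reasonable working assumption.
    hs = (height - height.mean()) / height.std()
    ys = (y - y.mean()) / y.std()

    # Step 1: initial slope from orthogonal regression on the full cloud.
    slope_std = tls_slope(hs, ys)

    # Step 2: kernel-density "contours"; keep only the densest points.
    points = np.vstack([hs, ys])
    density = gaussian_kde(points)(points)
    threshold = np.quantile(density, 1.0 - keep_fraction)
    core = density >= threshold
    slope_core_std = tls_slope(hs[core], ys[core])

    # Undo the standardisation to express the slopes in physical units
    # (decades of 1/tau per km of height).
    scale = y.std() / height.std()
    return slope_std * scale, slope_core_std * scale


if __name__ == "__main__":
    # Purely synthetic illustration: a slope of 0.08 decades per km plus
    # noise in both the decay times and the measured heights.
    rng = np.random.default_rng(42)
    h_true = rng.normal(90.0, 3.0, 5000)                  # heights (km)
    y_true = 0.08 * (h_true - 90.0) + rng.normal(0.0, 0.2, h_true.size)
    tau = 10.0 ** (-y_true)                               # decay times (s)
    h_meas = h_true + rng.normal(0.0, 0.5, h_true.size)   # noisy heights
    initial, refined = eiv_slope_with_contour_selection(h_meas, tau)
    print(f"initial slope: {initial:.3f}  contour-refined slope: {refined:.3f}")
```

The synthetic data in the `__main__` block are purely illustrative; in practice the slope would be converted to a height-averaged temperature through the temperature gradient model described in the paper.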