Abstract

In this paper, we investigate the asymptotic distributions of two types of Mahalanobis distance (MD): the leave-one-out MD and the classical MD, for both Gaussian- and non-Gaussian-distributed complex random vectors, when the sample size n and the dimension p both increase to infinity with a fixed ratio c = p/n. We study the distributional properties of the complex MD when the random samples are independent but not necessarily identically distributed. Some results on the F-matrix F = S2−1S1, the product of a sample covariance matrix S1 (computed from the independent variable array (Zi)1×n) and the inverse of another covariance matrix S2 (computed from the independent variable array (Zj≠i)p×n), are used to develop the asymptotic distributions of the MDs. We generalize the F-matrix results so that independence between the two components S1 and S2 of the F-matrix is not required.
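As a concrete illustration of the two MD variants discussed in the abstract (a minimal numerical sketch, not the paper's derivation; the dimensions p, n and the simulated complex Gaussian data are assumed for illustration): the classical MD of an observation uses the mean and covariance estimated from the full sample, while the leave-one-out MD excludes the evaluated observation from both estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 200  # illustrative dimension and sample size, p/n = c = 0.015
# Complex Gaussian sample: columns are the n observations in C^p
Z = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)

def classical_md(Z):
    """Squared classical MD of each column from the sample mean,
    using the full-sample covariance (Hermitian, unbiased)."""
    p, n = Z.shape
    C = Z - Z.mean(axis=1, keepdims=True)
    S = C @ C.conj().T / (n - 1)
    Sinv = np.linalg.inv(S)
    # d_j^2 = c_j^H S^{-1} c_j for each centered column c_j
    return np.real(np.einsum('ij,ik,kj->j', C.conj(), Sinv, C))

def loo_md(Z):
    """Squared leave-one-out MD: the mean and covariance are
    re-estimated with the evaluated column removed."""
    p, n = Z.shape
    d = np.empty(n)
    for i in range(n):
        Zi = np.delete(Z, i, axis=1)
        mean = Zi.mean(axis=1, keepdims=True)
        C = Zi - mean
        S = C @ C.conj().T / (n - 2)
        diff = Z[:, [i]] - mean
        d[i] = np.real(diff.conj().T @ np.linalg.inv(S) @ diff)[0, 0]
    return d

d_cl = classical_md(Z)
d_loo = loo_md(Z)
```

A useful sanity check on the classical variant: since the centered columns reproduce the covariance, the squared classical MDs always sum to p(n − 1).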

Highlights

  • Mahalanobis distance (MD) is a fundamental statistic in multivariate analysis

  • MD has been applied as a distance metric in various research areas, including propensity score analysis, matching [2], and classification and discriminant analysis [3]

  • The applications of MD also include multivariate calibration [4], psychological analysis [5], and the construction of multivariate process control charts [6]; it is a standard method for assessing the similarity between observations

Summary

Introduction

Mahalanobis distance (MD) is a fundamental statistic in multivariate analysis. It measures the distance between two random vectors, or between a random vector and the center of its distribution. In the MIMO setting, the channel can be represented by an nr × nt complex random matrix H with corresponding covariance matrices. Decision boundaries computed from the Euclidean distance are straight lines, which misclassify observations that lie far from their cluster center, whereas boundaries based on MDs are curves that better fit the covariance of each cluster. This example suggests that the MD is a more accurate measure of dissimilarity for image segmentation, and it extends naturally to clustering when the channel signals are complex random variables.
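The contrast between Euclidean and Mahalanobis boundaries can be seen in a small numerical example (a hypothetical 2-D illustration with assumed cluster means and covariances, not data from the paper): a point lying far along the major axis of an elongated cluster is Euclidean-closer to a neighboring cluster center, but MD-closer to its own cluster because the covariance discounts spread along the major axis.

```python
import numpy as np

# Hypothetical clusters: A is elongated along the x-axis, B is tight and isotropic
mu_a = np.array([0.0, 0.0])
cov_a = np.array([[9.0, 0.0],
                  [0.0, 0.25]])
mu_b = np.array([5.0, 3.0])
cov_b = np.eye(2) * 0.25

x = np.array([6.0, 0.0])  # lies on cluster A's major axis

def md2(x, mu, cov):
    """Squared Mahalanobis distance of x from a cluster with mean mu, covariance cov."""
    d = x - mu
    return d @ np.linalg.inv(cov) @ d

eu_a = np.sum((x - mu_a) ** 2)  # 36.0
eu_b = np.sum((x - mu_b) ** 2)  # 10.0 -> Euclidean assigns x to B
md_a = md2(x, mu_a, cov_a)      #  4.0
md_b = md2(x, mu_b, cov_b)      # 40.0 -> MD assigns x to A
```

The Euclidean rule draws a straight bisector between the two centers and places x in cluster B, while the MD rule bends the boundary around A's covariance and keeps x in cluster A, matching the behavior described above.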

Preliminaries
MD on Complex Random Variables
Summary and Conclusions
Methods