Abstract

The estimation of the coefficient matrix in a multivariate response linear regression model is considered in situations where we can observe only strictly increasing transformations of the continuous responses and covariates. It is further assumed that the joint dependence between all the observed variables is characterized by an elliptical copula. Penalized estimators of the coefficient matrix are obtained in a high-dimensional setting by assuming that the coefficient matrix is either element-wise sparse or row-sparse, and by incorporating the precision matrix of the error, which is also assumed to be sparse. Estimation of the copula parameters is achieved by inversion of Kendall’s tau. It is shown that when the true coefficient matrix is row-sparse, the estimator obtained via a group penalty outperforms the one obtained via a simple element-wise penalty. Simulation studies are used to illustrate this fact and the advantage of incorporating the precision matrix of the error when the correlation among the components of the error vector is strong. Moreover, the use of the normal-score rank correlation estimator is revisited in the context of high-dimensional Gaussian copula models. It is shown that this estimator remains the optimal estimator of the copula correlation matrix in this setting.
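The inversion of Kendall's tau mentioned in the abstract rests on the identity ρ = sin(πτ/2), which holds between the copula correlation ρ and Kendall's tau τ for elliptical copulas. The following sketch (an illustration, not the paper's full procedure; the function name and data are hypothetical) estimates the copula correlation matrix from pairwise sample taus, and shows that the estimate is unaffected by strictly increasing marginal transformations:

```python
import numpy as np
from scipy.stats import kendalltau

def tau_inversion_correlation(X):
    """Rank-based estimate of the copula correlation matrix via
    inversion of Kendall's tau: rho_jk = sin(pi * tau_jk / 2)."""
    n, d = X.shape
    R = np.eye(d)
    for j in range(d):
        for k in range(j + 1, d):
            tau, _ = kendalltau(X[:, j], X[:, k])
            R[j, k] = R[k, j] = np.sin(np.pi * tau / 2.0)
    return R

# Gaussian copula data with correlation 0.6; applying a strictly
# increasing transform (exp) to the margins leaves ranks, and hence
# the estimate, unchanged.
rng = np.random.default_rng(0)
Z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=2000)
R_hat = tau_inversion_correlation(np.exp(Z))
```

Because the estimator depends on the data only through ranks, it is adaptive to the unknown marginal transformations, which is the key feature exploited in the semiparametric setting described above.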

Highlights

  • Suppose that a q × 1 response vector Y has been observed on a random sample of subjects and that we wish to determine whether its behavior is influenced by explanatory variables forming a p × 1 vector X measured on the same individuals.

  • We show that the normal-score rank correlation estimator Σn retains its optimality in high-dimensional Gaussian copula models.

  • We study the estimation of the coefficient matrix in an elliptical copula multivariate response regression model.


Summary

Introduction

Suppose that a q × 1 response vector Y has been observed on a random sample of subjects and that we wish to determine whether its behavior is influenced by explanatory variables forming a p × 1 vector X measured on the same individuals. The estimation of B∗ in Model (1.2) has recently been considered by Cai and Zhang [4] in the special case where q = 1 and the vector (f(X), g(Y)) is jointly normal and scaled in such a way that its components have unit variance. These authors provide a rate-optimal estimation procedure for the vector B∗ which is adaptive to the unknown marginal transformations. These studies generally assume that the components of ε in (1.1) are uncorrelated. To avoid this restriction, we proceed as in [35, 43], where this assumption is relaxed by requiring only that the precision matrix Ωεε = Σεε^{-1} is sparse in Model (1.1).
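The distinction between element-wise and row sparsity can be made concrete with a small sketch (an illustration with simulated data, not the paper's rank-based procedure): when entire rows of the true coefficient matrix B∗ are zero, a group penalty that selects or discards whole rows jointly, as in the multi-task Lasso, exploits this structure, whereas an element-wise ℓ1 penalty fits each response column separately.

```python
import numpy as np
from sklearn.linear_model import Lasso, MultiTaskLasso

# Simulated row-sparse setting: only the first 5 of p = 50 predictor
# rows carry signal across all q = 5 responses.
rng = np.random.default_rng(1)
n, p, q = 200, 50, 5
X = rng.standard_normal((n, p))
B = np.zeros((p, q))
B[:5, :] = rng.standard_normal((5, q))
Y = X @ B + 0.5 * rng.standard_normal((n, q))

# Group (row) penalty: rows of the estimate are zeroed out jointly.
B_group = MultiTaskLasso(alpha=0.1).fit(X, Y).coef_.T          # shape (p, q)

# Element-wise penalty: one Lasso per response column.
B_elem = np.column_stack(
    [Lasso(alpha=0.1).fit(X, Y[:, k]).coef_ for k in range(q)]  # shape (p, q)
)

rows_group = int(np.sum(np.any(B_group != 0, axis=1)))
rows_elem = int(np.sum(np.any(B_elem != 0, axis=1)))
```

In this row-sparse regime the group estimate has entire rows set to zero, mirroring the theoretical finding that a group penalty outperforms a simple element-wise penalty when B∗ is row-sparse.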

Plan of the paper
Notations and conventions
Model setup and rank-based estimation
The Lasso approach
The Dantzig selector approach
Discussion
Estimation of the precision matrix
Second estimation of the coefficient matrix
Preliminaries
Element-wise sparsity
Row sparsity
The group Lasso approach
The group Dantzig selector approach
Simulation studies
Illustration
Proofs for Section 3
Proofs for Section 5
Conditional moments of Gaussian distributions
Bernstein’s inequality
Reduction to a net
Decoupling the U-statistic
Conversion into an average of sub-Gaussian random variables
Treating the cosine function transformation