Abstract

In this paper, we study the nonparametric linear model when the error process is a dependent Gaussian process. We focus on the estimation of the mean vector via a model selection approach. We first give the general theoretical form of the penalty function ensuring that the penalized estimator among a collection of models satisfies an oracle inequality. We then derive a penalty shape involving the spectral radius of the covariance matrix of the errors, which can be chosen proportional to the dimension when the error process is stationary and short-range dependent. However, this penalty can be too rough in some cases, in particular when the error process is long-range dependent. In a second part, we focus on the fixed-design regression model, assuming that the error process is a stationary Gaussian process. We propose a model selection procedure to estimate the mean function via piecewise polynomials on a regular partition, when the error process is either short-range dependent, long-range dependent or anti-persistent. We present different kinds of penalties, depending on the memory of the process. For each case, an adaptive estimator is built and the rates of convergence are computed. Through several sets of simulations, we study the performance of these penalties for all types of errors (short-memory, long-memory and anti-persistent errors). Finally, we apply our method to the well-known Nile data, which clearly shows that the type of dependence of the error process must be taken into account.

Highlights

  • We adopt a model selection approach in the general framework where the error process ε is a dependent Gaussian random vector with covariance matrix Σ

  • Our first goal is to give the theoretical form of the penalty function, depending on Σ, ensuring that the penalized estimator among a collection of models satisfies an oracle inequality

  • Here ‖·‖_n denotes the Euclidean norm on ℝ^n, and pen : M → ℝ_+ is a penalty function defined on the family of models


Summary

Introduction

We adopt a model selection approach in the general framework where the error process ε is a dependent Gaussian random vector with covariance matrix Σ. Our first goal is to give the theoretical form of the penalty function, depending on Σ, that ensures the penalized estimator among a collection of models satisfies an oracle inequality. This model has been widely studied for independent and identically distributed (i.i.d.) errors, in particular by Birgé and Massart in the Gaussian case [10]. Following the approach of Birgé and Massart, we derive a penalty function which provides an oracle inequality for the model selection procedure in the dependent Gaussian framework.
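To make the procedure concrete, here is a minimal Python sketch of penalized least-squares model selection over piecewise polynomials on regular partitions of [0, 1]. This is an illustrative assumption, not the authors' implementation: the function names (`fit_piecewise_poly`, `select_model`), the penalty constant `kappa`, and the scalar `rho` (standing in for the spectral-radius term of the error covariance matrix, assumed known here) are all hypothetical choices.

```python
import numpy as np

def fit_piecewise_poly(y, n_pieces, degree):
    """Least-squares projection of y onto piecewise polynomials of the
    given degree on a regular partition of [0, 1] (fixed design x_i = i/n).
    Returns the fitted values and the model dimension D_m."""
    n = len(y)
    x = np.arange(n) / n
    edges = np.linspace(0.0, 1.0, n_pieces + 1)
    yhat = np.empty(n)
    dim = 0
    for k in range(n_pieces):
        # Last piece is closed on the right so every point is covered.
        if k < n_pieces - 1:
            mask = (x >= edges[k]) & (x < edges[k + 1])
        else:
            mask = x >= edges[k]
        coeffs = np.polyfit(x[mask], y[mask], degree)
        yhat[mask] = np.polyval(coeffs, x[mask])
        dim += degree + 1
    return yhat, dim

def select_model(y, max_pieces, degree, kappa, rho):
    """Pick the partition size minimizing the penalized criterion
    ||y - yhat_m||_n^2 + pen(m), with pen(m) = kappa * rho * D_m / n,
    i.e. a penalty proportional to the model dimension (the short-range
    dependent shape discussed above)."""
    n = len(y)
    best = None
    for m in range(1, max_pieces + 1):
        yhat, dim = fit_piecewise_poly(y, m, degree)
        crit = np.mean((y - yhat) ** 2) + kappa * rho * dim / n
        if best is None or crit < best[0]:
            best = (crit, m, yhat)
    return best[1], best[2]
```

For a step-shaped mean function with moderate noise, the criterion typically selects a small partition that captures the jump; long-range dependent errors would call for a heavier penalty than the dimension-proportional one sketched here.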
