Abstract
Consider a random vector $(X,Y)$ and let $m(x)=E(Y|X=x)$. We are interested in testing $H_0:m\in {\cal M}_{\Theta,{\cal G}}=\{\gamma(\cdot,\theta,g):\theta \in \Theta,g\in {\cal G}\}$ for some known function $\gamma$, some compact set $\Theta \subset \mathbb{R}^p$ and some set ${\cal G}$ of real-valued functions. Specific examples of this general hypothesis include testing for a parametric regression model, a generalized linear model, a partial linear model and a single index model; the selection of explanatory variables can also be treated as a special case of this hypothesis. To test the null hypothesis, we make use of the so-called marked empirical process introduced by \citeD and studied by \citeSt for the particular case of parametric regression, in combination with empirical likelihood theory, in order to obtain a powerful testing procedure. The asymptotic validity of the proposed test is established, and its finite sample performance is compared with that of other existing tests by means of a simulation study.
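As a rough illustration of the building block behind the test, the Python sketch below computes a Kolmogorov–Smirnov-type statistic based on the residual-marked empirical process for the simplest parametric null $m(x)=\theta_0+\theta_1 x$. The linear null model, the least-squares fit and the function names are assumptions made for this example; the empirical likelihood weighting developed in the paper is not included.

```python
# Minimal sketch (not the paper's exact procedure) of a marked empirical
# process test statistic for a parametric null H0: m(x) = theta0 + theta1*x.
import numpy as np

def marked_empirical_process_stat(x, y):
    """Sup-type statistic of the residual-marked empirical process."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    # Fit the assumed linear null model by least squares.
    X = np.column_stack([np.ones(n), x])
    theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ theta_hat
    # R_n(u) = n^{-1/2} sum_i resid_i * 1{x_i <= u}, evaluated at the sample points.
    order = np.argsort(x)
    Rn = np.cumsum(resid[order]) / np.sqrt(n)
    return np.max(np.abs(Rn))

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=200)   # data generated under H0
print(marked_empirical_process_stat(x, y))
```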
Highlights
AMS 2000 subject classifications: Primary 62E20; secondary 62F03, 62F05, 62F40, 62G08, 62G10
Empirical likelihood is a technique designed to construct a nonparametric likelihood for a parameter of interest in a nonparametric or semiparametric setting, while enjoying properties typical of a parametric likelihood, such as Wilks' theorem and the Bartlett correction recently proved by [1]; a small numerical sketch of Wilks' theorem for the empirical likelihood is given after these highlights.
There has been some recent interest in goodness-of-fit tests based on the empirical likelihood in the regression context. [9] propose a sieve empirical likelihood test for general varying-coefficient regression models. [11] study the properties of the empirical likelihood in the presence of both finite- and infinite-dimensional nuisance parameters, as well as when the data dimension is high. [2], [21], [12] and [5] propose different tests based on empirical likelihood with conditional moment restrictions, including situations with dependent data.
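To make Wilks' theorem concrete, here is a minimal sketch of the standard empirical likelihood construction for a mean (not code from the paper): it computes $-2\log R(\mu)$ at the true mean and compares it with the $\chi^2_1$ limit. The data-generating choices and function names are assumptions made for the example.

```python
# Sketch of Wilks' theorem for empirical likelihood: -2 log R(mu) for the mean
# is asymptotically chi-square(1) at the true mu (Owen's construction).
import numpy as np
from scipy import optimize, stats

def neg2_log_el_mean(x, mu):
    """-2 log empirical likelihood ratio for the mean mu."""
    z = np.asarray(x, dtype=float) - mu

    # Lagrange multiplier equation: sum_i z_i / (1 + lam*z_i) = 0,
    # with lam restricted so that all weights stay positive.
    def score(lam):
        return np.sum(z / (1.0 + lam * z))

    eps = 1e-10
    lo = (-1.0 + eps) / z.max()   # keep 1 + lam*z_i > 0 for all i
    hi = (-1.0 + eps) / z.min()
    lam = optimize.brentq(score, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=100)      # true mean is 2.0
stat = neg2_log_el_mean(x, mu=2.0)
print(stat, stats.chi2.sf(stat, df=1))        # p-value from the chi^2(1) limit
```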
Summary
In this paper we extend the above process to a general framework that includes a large number of common regression models, and we make use of empirical likelihood theory in the construction of the test statistic. In that paper the authors give primitive conditions under which the empirical likelihood statistic converges for a fixed $u$; their result can be generalized to the current situation of a process in $u$. Since condition (C2) assures that $\sup_u |\hat{T}(u) - T(u)| = o_P(1)$, we have the following result (Theorem 2.2) concerning the bootstrap approximation of the process $W^2(u)$, $u \in {\cal U}$, given the data $(X_i, Y_i)$, $1 \le i \le n$.
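As an illustration of how such a bootstrap approximation can be used in practice, the sketch below calibrates a sup-type statistic of the residual-marked process by a wild bootstrap with Rademacher multipliers under an assumed linear null model. This is only a schematic stand-in for the scheme of Theorem 2.2; all function names and modelling choices here are assumptions.

```python
# Wild-bootstrap calibration of a sup-type statistic of the residual-marked
# empirical process under an assumed linear null model (illustrative only).
import numpy as np

def fit_and_process(x, y):
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ theta_hat
    order = np.argsort(x)
    Rn = np.cumsum(resid[order]) / np.sqrt(n)     # marked process at sample points
    return np.max(np.abs(Rn)), X, theta_hat, resid

def wild_bootstrap_pvalue(x, y, B=500, seed=0):
    rng = np.random.default_rng(seed)
    stat, X, theta_hat, resid = fit_and_process(x, y)
    fitted = X @ theta_hat
    boot = np.empty(B)
    for b in range(B):
        v = rng.choice([-1.0, 1.0], size=len(y))  # Rademacher multipliers
        y_star = fitted + v * resid               # data regenerated under the null
        boot[b], *_ = fit_and_process(x, y_star)
    return stat, np.mean(boot >= stat)            # statistic and bootstrap p-value

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 150)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=150)
print(wild_bootstrap_pvalue(x, y))
```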