Abstract

We consider composite-composite testing problems for the expectation in the Gaussian sequence model where the null hypothesis corresponds to a closed convex subset $\mathcal{C}$ of $\mathbb{R}^{d}$. We adopt a minimax point of view, and our primary objective is to describe the smallest Euclidean distance between the null and alternative hypotheses such that there is a test with small total error probability. In particular, we focus on the dependence of this distance on the dimension $d$ and the variance $\frac{1}{n}$, giving rise to the minimax separation rate. In this paper we discuss lower and upper bounds on this rate for different smooth and non-smooth choices of $\mathcal{C}$.

Highlights

  • In this paper we consider the problem of testing whether a vector μ ∈ Rd belongs to a closed convex subset C of Rd with d ∈ N, based on a noisy observation X obtained from the Gaussian sequence model with variance scaling parameter n ∈ N, i.e. X = μ + ξ/√n, (1.1) where ξ is a standard Gaussian vector

  • We consider composite-composite testing problems for the expectation in the Gaussian sequence model where the null hypothesis corresponds to a closed convex subset C of Rd

  • In an ℓ2 sense, we aim at finding the order of magnitude of the smallest separation distance ρ > 0 from C such that the testing problem can be solved with small total error probability


Summary

Introduction

In this paper we consider the problem of testing whether a vector μ ∈ Rd belongs to a closed convex subset C of Rd with d ∈ N, based on a noisy observation X obtained from the Gaussian sequence model with variance scaling parameter n ∈ N, i.e. X = μ + ξ/√n with ξ a standard Gaussian vector. A versatile way of solving such testing problems was introduced in [12], where the authors combine signal detection ideas with a covering of the null hypothesis to derive minimax optimal testing procedures for composite-composite testing problems, provided that the null hypothesis is not too large (i.e. that its entropy number is not too large; see Assumption (A3) in [12]). This idea can be generalised to the case where the null hypothesis is "too large" (when Assumption (A3) in [12] is not satisfied); the approach implies that an upper bound on the minimax rate of separation is the sum of the signal detection rate and the optimal estimation rate in the null hypothesis C; see [5] for an illustration of this for a specific convex shape. Using this technique, one finds that the smaller the entropy of C, the smaller the separation rate. This property takes us quite far: in particular, given any separation rate satisfying (1.4), it allows for constructing a set C exhibiting this rate up to ln(d)-factors.

Setting
A general guarantee and extreme cases
A simple smoothness-type property
Discussion
Techniques for obtaining lower bounds
Concentration properties of Gaussian and χ2 random variables
Frequently used bounds for expressions containing square roots
Proofs for Section 3
Proofs for Section 4
Proofs for Section 5

