Abstract

Introduction

In this chapter we compare some of the most familiar linear models. Here 'linear' means essentially that the expectations of the observations depend linearly on the unknown parameter. The main emphasis will be on linear models whose covariances are either completely known or known up to a common positive factor. The observations which realize our models need not be normally distributed. If, however, we do assume normality, the conclusions may be drastically strengthened; in that case the Hellinger transform is a very useful tool.

As we do not wish to stretch the generality too far, we restrict our attention to models where the dimensions of the parameter spaces and the sample spaces are finite. We shall find it convenient to use a coordinate-free approach, and we assume that the underlying spaces are finite-dimensional inner product spaces, i.e. finite-dimensional Hilbert spaces. One advantage of this framework over the ℝⁿ approach is that the various related linear spaces need not be re-parametrized in order to be in the appropriate form. The framework also makes it easier to envisage generalizations to infinite-dimensional spaces.

Before proceeding we should state some basic facts on random vectors. First, a finite-dimensional inner product space, like any metric space, has a natural topology. The Borel class is of course the σ-algebra generated by the open sets (equivalently, the closed sets or the compact sets), and this is also the smallest σ-algebra making all linear functionals measurable. Representing the vectors by their coordinates with respect to a fixed orthonormal basis, we obtain an isomorphism between H and some space ℝᵐ equipped with the usual scalar product.
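The last remark can be illustrated numerically. The sketch below (a hypothetical NumPy example, not part of the original text) takes an orthonormal basis of ℝᵐ, maps each vector to its coordinate vector with respect to that basis, and checks that the map preserves the scalar product, i.e. that it is the isometry H ≅ ℝᵐ described above:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4

# Orthonormal basis of an m-dimensional space, taken here as the columns
# of an orthogonal matrix Q obtained from a QR factorization.
Q, _ = np.linalg.qr(rng.normal(size=(m, m)))

def coords(x, basis=Q):
    """Coordinate vector of x with respect to the orthonormal basis."""
    return basis.T @ x

x, y = rng.normal(size=m), rng.normal(size=m)

# The coordinate map is a linear isometry: <x, y> = <coords(x), coords(y)>.
assert np.isclose(x @ y, coords(x) @ coords(y))
```

Since Q is orthogonal, (Qᵀx)·(Qᵀy) = xᵀQQᵀy = x·y, which is exactly why the coordinate representation lets one work in ℝᵐ with the usual scalar product.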
