Many distributions in multivariate analysis can be expressed in a form involving hypergeometric functions $_pF_q$ of matrix argument, e.g., the noncentral Wishart $(_0F_1)$ and the noncentral multivariate $F$ $(_1F_1)$. For an exposition of distributions in this form see James [9]. The hypergeometric function $_pF_q$ has been defined by Constantine [1] by the power series representation \begin{equation*}\tag{1.1} _pF_q(a_1,\cdots, a_p; b_1,\cdots, b_q; R) = \sum^\infty_{k=0} \sum_\kappa \frac{(a_1)_\kappa\cdots(a_p)_\kappa}{(b_1)_\kappa\cdots (b_q)_\kappa} \frac{C_\kappa (R)}{k!}\end{equation*} where $a_1,\cdots, a_p, b_1,\cdots, b_q$ are real or complex constants, $(a)_\kappa = \prod^m_{i=1}(a - \frac{1}{2}(i - 1))_{k_i},\quad (a)_n = a(a + 1)\cdots (a + n - 1),$ and $C_\kappa(R)$ is the zonal polynomial of the $m \times m$ symmetric matrix $R$ corresponding to the partition $\kappa = (k_1, k_2,\cdots, k_m), k_1 \geqq k_2 \geqq \cdots \geqq k_m$, of the integer $k$ into not more than $m$ parts. The functions defined by (1.1) are identical with the hypergeometric functions defined by Herz [5] by means of Laplace and inverse Laplace transforms. For a detailed discussion of hypergeometric functions and zonal polynomials, the reader is referred to the paper [1] of Constantine and the papers [7], [8], [9] of James. From a practical point of view, however, the series (1.1) may not be of great value: although computer programs have been developed for calculating zonal polynomials up to quite high order, the series (1.1) may converge very slowly, so asymptotic expansions for such functions are needed. It is well known that asymptotic expansions for a function can in many cases be derived from a differential equation satisfied by the function (see, e.g., Erdélyi [4]), and so, with this in mind, a study of differential equations satisfied by certain hypergeometric functions seems justified. In this paper a conjecture due to A. G.
Constantine is verified; that is, it is shown that the function \begin{equation*}\tag{1.2} _2F_1(a, b; c; R) = \sum^\infty_{k=0} \sum_\kappa \frac{(a)_\kappa(b)_\kappa}{(c)_\kappa} \frac{C_\kappa(R)}{k!}\end{equation*} satisfies the system of partial differential equations \begin{align*} \tag{1.3} R_i(1 &- R_i)\partial^2F/\partial R_i^2 + \{ c - \frac{1}{2}(m - 1) - (a + b + 1 - \frac{1}{2}(m - 1))R_i \\ &+\frac{1}{2} \sum^m_{j=1,j\neq i}\lbrack R_i(1 - R_i)/(R_i - R_j) \rbrack\}\partial F/\partial R_i \\ &-\frac{1}{2} \sum^m_{j=1,j\neq i} \lbrack R_j(1 - R_j)/(R_i - R_j) \rbrack\partial F/\partial R_j = abF \quad (i = 1,2, \cdots, m)\end{align*} where $R_1, R_2,\cdots, R_m$ are the latent roots of the complex symmetric $m \times m$ matrix $R$. When $m = 1$, the system (1.3) clearly reduces to the classical hypergeometric equation. It appears difficult to establish this conjecture directly, and the method used has necessitated a section (Section 3) devoted to a summary of the argument involved. The main result of the paper is summarized in Theorem 3.1 of that section. Section 4 contains proofs referred to in Section 3. Using the fact that $C_\kappa (R)$ satisfies the partial differential equation (James [10])\begin{equation*}\tag{1.4} \sum^m_{i=1} R_i^2\partial^2y/\partial R_i^2 + \sum^m_{i=1}\sum^m_{j=1,j\neq i}\lbrack R_i^2/(R_i - R_j) \rbrack\partial y/\partial R_i = \sum^m_{i=1}k_i(k_i + m - i - 1)y,\end{equation*} James and Constantine [11] have obtained the effects of certain differential operators on $C_\kappa(R)$. These results are given in Section 2 and are used in many of the proofs in Section 4. Section 5 is probably of most interest statistically, for there systems of partial differential equations similar to (1.3) are given for $_1F_1(a; c; R)$ and $_0F_1(c; R)$, two functions which occur often in multivariate distributions.
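The scalar reduction noted above can be spelled out explicitly: when $m = 1$ the sums over $j \neq i$ in (1.3) are empty, $c - \frac{1}{2}(m - 1) = c$ and $a + b + 1 - \frac{1}{2}(m - 1) = a + b + 1$, so (1.3) collapses to the classical Gauss hypergeometric equation in the single variable $R = R_1$:
\begin{equation*} R(1 - R)\frac{d^2F}{dR^2} + \lbrack c - (a + b + 1)R \rbrack\frac{dF}{dR} - abF = 0, \end{equation*}
whose solution analytic at $R = 0$ with $F(0) = 1$ is the classical $_2F_1(a, b; c; R)$.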
The differential equations for $_1F_1(a; c; R)$ have been used by Constantine [3] to obtain an asymptotic expansion for the noncentral likelihood ratio criterion, and by the author [12] to obtain asymptotic distributions of Hotelling's generalized $T_0^2$ statistic, Pillai's $V^{(m)}$ criterion, and the largest latent root of the covariance matrix. The system for $_0F_1(c; R)$ is a generalization of that given by James [6] for $_0F_1(m/2; R)$.
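As an illustrative sketch, not part of the paper: when $m = 1$ the zonal polynomial $C_\kappa(R)$ is simply $R^k$, so the series (1.1) reduces to the ordinary Gauss series, which can be truncated and checked numerically against a known closed form, $_2F_1(1, 1; 2; x) = -\log(1 - x)/x$. The function names and the truncation level `terms` below are hypothetical choices for the sketch, and the slow convergence near $|x| = 1$ mirrors the convergence difficulty of (1.1) noted above.

```python
import math

def rising(a, n):
    """Pochhammer symbol (a)_n = a(a + 1)...(a + n - 1), with (a)_0 = 1."""
    out = 1.0
    for i in range(n):
        out *= a + i
    return out

def hyp2f1_series(a, b, c, x, terms=80):
    """Truncated Gauss series for 2F1(a, b; c; x), the m = 1 case of (1.1).

    Converges for |x| < 1; `terms` is an illustrative truncation point,
    not a rigorous error bound.
    """
    return sum(rising(a, k) * rising(b, k) / rising(c, k)
               * x ** k / math.factorial(k)
               for k in range(terms))

# Sanity check against the closed form 2F1(1, 1; 2; x) = -log(1 - x)/x.
x = 0.5
approx = hyp2f1_series(1.0, 1.0, 2.0, x)
exact = -math.log(1.0 - x) / x
```

For matrix argument ($m > 1$) no such elementary truncation applies, since the zonal polynomials themselves must be generated term by term; this is precisely the practical difficulty that motivates the differential-equation approach of the paper.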