Abstract

Monotonically convergent algorithms are described for maximizing six (constrained) functions of vectors $x$, or of matrices $X$ with columns $x_1, \ldots, x_r$. These functions are $h_1(x) = \sum_k (x'A_k x)(x'C_k x)^{-1}$, $H_1(X) = \sum_k \operatorname{tr}\,(X'A_k X)(X'C_k X)^{-1}$, $\tilde{h}_1(X) = \sum_k \sum_l (x_l'A_k x_l)(x_l'C_k x_l)^{-1}$ with $X$ constrained to be columnwise orthonormal, $h_2(x) = \sum_k (x'A_k x)^2 (x'C_k x)^{-1}$ subject to $x'x = 1$, $H_2(X) = \sum_k \operatorname{tr}\,(X'A_k X)(X'A_k X)'(X'C_k X)^{-1}$ subject to $X'X = I$, and $\tilde{h}_2(X) = \sum_k \sum_l (x_l'A_k x_l)^2 (x_l'C_k x_l)^{-1}$ subject to $X'X = I$. In these functions the matrices $C_k$ are assumed to be positive definite; the matrices $A_k$ can be arbitrary square matrices. The general formulation of the functions and the algorithms allows the algorithms to be applied to various problems that arise in multivariate analysis. Several applications of the general algorithms are given. Specifically, algorithms are given for reciprocal principal components analysis, binormamin rotation, generalized discriminant analysis, variants of generalized principal components analysis, simple structure rotation for one of the latter variants, and set component analysis. For most of these methods the algorithms appear to be new; for the others, the existing algorithms turn out to be special cases of the newly derived general ones.
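For readers who want to experiment, the following is a minimal numerical sketch (not the paper's algorithm; all matrix and function names are illustrative). It evaluates $h_1$ and checks the single-term case $K = 1$, where maximizing $(x'A x)(x'C x)^{-1}$ is the classical generalized Rayleigh-quotient problem, solved by the leading generalized eigenvector of the symmetric part of $A$ in the metric $C$.

```python
# Sketch: evaluating h_1(x) = sum_k (x'A_k x)/(x'C_k x) and verifying the
# K = 1 special case against a generalized eigenvalue solver. This is an
# illustration of the objective, not the paper's monotone algorithm.
import numpy as np
from scipy.linalg import eigh

def h1(x, A_list, C_list):
    """h_1(x); each C_k is assumed positive definite, A_k arbitrary square."""
    return sum((x @ A @ x) / (x @ C @ x) for A, C in zip(A_list, C_list))

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))      # arbitrary square matrix
C = rng.standard_normal((n, n))
C = C @ C.T + n * np.eye(n)          # positive definite by construction

# x'Ax depends on A only through its symmetric part (A + A')/2, so for K = 1
# the maximizer is the leading generalized eigenvector of (A_sym, C).
A_sym = (A + A.T) / 2
vals, vecs = eigh(A_sym, C)          # solves A_sym v = lambda C v, ascending
x_opt = vecs[:, -1]                  # eigenvector of the largest eigenvalue

print(h1(x_opt, [A], [C]), vals[-1])  # the two values coincide
```

For $K > 1$ terms no closed-form eigenvector solution exists in general, which is what motivates iterative, monotonically convergent schemes of the kind the paper develops.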
