Abstract
Because they require very little storage and can be computationally efficient, gradient algorithms are attractive methods for fitting large nonorthogonal analysis of variance (ANOVA) models. A coordinate-free approach provides simple definitions of a number of well-known gradient algorithms and insights into their similarities and differences. The key to finding a good algorithm is choosing an algorithm metric that both leads to easily computed gradients and is as close as possible to the metric defined by the ANOVA problem. This motivates a new class of algorithms based on a proportional subclass metric. Several new theoretical convergence results are derived, and some empirical comparisons are made. A similar, but much briefer, treatment of analysis of covariance is given. On theoretical convergence, it is shown, for example, that the Golub and Nash (1982) algorithm requires at most d + 1 iterations if all but d of the cells in the model have...