Abstract
Adaptive Gauss-Hermite quadrature is used for the computation of the log-likelihood function for generalized linear mixed models. The basic first step in this method is to multiply and divide the integrand of interest by a carefully chosen probability density function. The same first step is used for the computation of this log-likelihood function using simulations that employ importance sampling. We compare these two methods by considering in detail a single cluster from a well-known teratology data set that is modelled using a logistic regression with random intercept. We show that while importance sampling fails for this computation, adaptive Gauss-Hermite quadrature does not. We derive a new upper bound on the error of approximation of adaptive Gauss-Hermite quadrature. Using this new upper bound, we show that the feature of this problem that makes importance sampling fail is useful in disclosing why adaptive Gauss-Hermite quadrature succeeds.
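To make the comparison concrete, here is a minimal sketch of the shared "first step" the abstract describes: both methods center on a Gaussian density fitted to the mode and curvature of the integrand for a single cluster of a logistic random-intercept model. All data values and parameters below (`y`, `beta0`, `sigma_b`) are hypothetical placeholders, not taken from the teratology data set, and this is an illustration of the two estimators, not the paper's analysis.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# Hypothetical single-cluster binary responses and parameters for a
# logistic model with random intercept b ~ N(0, sigma_b^2).
y = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1], dtype=float)
beta0, sigma_b = 0.5, 1.2  # assumed fixed intercept and random-effect SD

def log_g(b):
    """log of the integrand: cluster likelihood times the N(0, sigma_b^2) density."""
    eta = beta0 + b
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    logdens = -0.5 * (b / sigma_b) ** 2 - 0.5 * np.log(2 * np.pi * sigma_b ** 2)
    return loglik + logdens

# Shared first step: locate the mode and curvature of log g (a Laplace-type
# Gaussian approximation), here by Newton-Raphson on d/db log g(b).
n, b_hat = len(y), 0.0
for _ in range(50):
    p = 1.0 / (1.0 + np.exp(-(beta0 + b_hat)))       # common success prob.
    grad = np.sum(y - p) - b_hat / sigma_b ** 2
    hess = -n * p * (1.0 - p) - 1.0 / sigma_b ** 2
    b_hat -= grad / hess
s_hat = 1.0 / np.sqrt(-hess)                         # approximating SD

# Adaptive Gauss-Hermite quadrature: shift and scale the standard
# Gauss-Hermite nodes (weight exp(-x^2)) by the fitted mode and SD.
x, w = hermgauss(15)
nodes = b_hat + np.sqrt(2.0) * s_hat * x
L_aghq = np.sqrt(2.0) * s_hat * np.sum(
    w * np.exp(x ** 2 + np.array([log_g(b) for b in nodes]))
)

# Importance sampling with the same N(b_hat, s_hat^2) proposal density.
rng = np.random.default_rng(0)
draws = rng.normal(b_hat, s_hat, size=50_000)
log_q = (-0.5 * ((draws - b_hat) / s_hat) ** 2
         - 0.5 * np.log(2 * np.pi * s_hat ** 2))
L_is = np.mean(np.exp(np.array([log_g(b) for b in draws]) - log_q))

print("AGHQ estimate:", L_aghq)
print("IS estimate:  ", L_is)
```

In this toy cluster the proposal matches the integrand well, so the two estimates nearly agree; the paper's point is that for the real teratology cluster the integrand has features the Gaussian proposal misses, which ruins the importance-sampling estimator while adaptive quadrature remains accurate.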