Abstract

Fisher's classical likelihood has become the standard procedure for making inferences about fixed unknown parameters. Recently, inference for unobservable random variables, such as random effects, factors, and missing values, has become important in statistical analysis. Because Fisher's likelihood cannot accommodate such unobservable random variables, the full Bayesian method has been the only approach available for their inference. An alternative likelihood approach was proposed by Lee and Nelder. In the context of Fisher likelihood, the likelihood principle means that the likelihood function carries all relevant information regarding the fixed unknown parameters. Bjørnstad generalized this to the extended likelihood principle: all information in the observed data about the fixed unknown parameters and the unobservables is contained in an extended likelihood, such as the h-likelihood. However, it turns out that using the extended likelihood for inference is not as straightforward as using the Fisher likelihood. In this paper, we describe how to extract information from the data via the h-likelihood. This provides a new way of making statistical inferences across the entire field of statistical science.

This article is categorized under:
Statistical Models > Generalized Linear Models
Algorithms and Computational Methods > Maximum Likelihood Methods
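For concreteness, a minimal sketch of the extended likelihood described above, in the usual convention of Lee and Nelder (the symbols $y$ for observed data, $v$ for unobservables, and $\theta$ for fixed parameters are our illustrative choices, not taken from this abstract): the h-likelihood is the joint log-density

\[
h(\theta, v; y) \;=\; \log f_{\theta}(y \mid v) \;+\; \log f_{\theta}(v),
\]

so that Fisher's classical (marginal) likelihood for $\theta$ alone is recovered by integrating out the unobservables, $L(\theta; y) = \int \exp\{h(\theta, v; y)\}\, dv$, while inference about $v$ uses the extended likelihood directly. This makes visible why the extension is nontrivial: $v$ enters $h$ as an argument of a density rather than as a parameter, so maximization and integration over $v$ behave differently than they do for $\theta$.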
