Abstract

Fisher's classical likelihood has become the standard procedure for making inference about fixed unknown parameters. Recently, inference about unobservable random variables, such as random effects, factors, and missing values, has become important in statistical analysis. Because Fisher's likelihood cannot accommodate such unobservable random variables, the full Bayesian method has been the only approach available for inference. An alternative likelihood approach was proposed by Lee and Nelder. In the context of the Fisher likelihood, the likelihood principle means that the likelihood function carries all relevant information regarding the fixed unknown parameters. Bjørnstad extended this to the extended likelihood principle: all the information in the observed data about the fixed unknown parameters and the unobservables is contained in the extended likelihood, such as the h‐likelihood. However, it turns out that inference based on the extended likelihood is not as straightforward as with the Fisher likelihood. In this paper, we describe how to extract the information in the data from the h‐likelihood. This provides a new approach to statistical inference across the fields of statistical science.

This article is categorized under:
Statistical Models > Generalized Linear Models
Algorithms and Computational Methods > Maximum Likelihood Methods
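For readers unfamiliar with the terminology, the following is a minimal formal sketch of the objects named in the abstract, following the standard formulation of Lee and Nelder; the notation below is not taken from the abstract itself. For observed data $y$, unobservables $v$, and fixed parameters $\theta$, the extended likelihood is the joint density

$$L_e(\theta, v; y) = f_\theta(y, v) = f_\theta(y \mid v)\, f_\theta(v),$$

and the h‐likelihood is its logarithm,

$$h(\theta, v; y) = \log f_\theta(y \mid v) + \log f_\theta(v).$$

Fisher's classical (marginal) likelihood is recovered by integrating out the unobservables, $L(\theta; y) = \int f_\theta(y, v)\, dv$, which is why it carries information about $\theta$ but cannot support inference about $v$.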
