ABSTRACT
This article compares the information content of a sample under two competing Bayesian approaches. One follows Dennis Lindley's Bayesian standpoint: one begins by formulating a prior for a parameter related to the problem in question and incorporates a likelihood to move to a posterior. This contrasts with the usual Bayesian approach, in which one starts with a likelihood model, formulates a prior distribution for its parameters, and derives the corresponding posterior. In both cases, the sample's information content is measured by the difference between the prior and posterior entropies. We investigate this contrast in the context of learning about the moments of a variable. The maximum entropy principle is used to construct the likelihood model consistent with the given moment parameters; this likelihood is then combined with the prior information on the parameters to derive the posterior. The model parameters are the Lagrange multipliers of the moment constraints. A prior for the moments induces a prior for the model parameters, but the data provide differing amounts of information about the moments and the multipliers. The results obtained for several problems show that the information content under the two formulations can differ substantially. Additional information measures are derived to assess the effects of operating environments on the lifetimes of system components.
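As a minimal sketch of this contrast (not taken from the article), consider a positive variable whose only constraint is its mean: the maximum-entropy likelihood is then exponential, with the Lagrange multiplier acting as the rate parameter. With a conjugate Gamma prior on that multiplier, the prior-to-posterior entropy change can be computed either for the multiplier itself or for the induced moment (the mean), and the two measures generally disagree. The prior hyperparameters, sample size, and variable names below are illustrative assumptions, and the entropies come from scipy's generic entropy routines.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Maximum-entropy likelihood under a single mean constraint on a positive
# variable: Exponential(lam), where lam is the Lagrange multiplier.
true_lam = 2.0
n = 25
x = rng.exponential(scale=1.0 / true_lam, size=n)    # simulated sample

# Usual Bayesian route: conjugate Gamma prior on the multiplier lam.
a0, b0 = 2.0, 1.0                       # assumed prior shape and rate
a1, b1 = a0 + n, b0 + x.sum()           # Gamma posterior after the sample

prior_lam = stats.gamma(a0, scale=1.0 / b0)
post_lam = stats.gamma(a1, scale=1.0 / b1)
info_lam = prior_lam.entropy() - post_lam.entropy()   # nats learned about lam

# Lindley-style route: the same prior, expressed on the moment mu = 1/lam,
# is inverse-gamma; measure the prior-to-posterior entropy drop for mu.
prior_mu = stats.invgamma(a0, scale=b0)
post_mu = stats.invgamma(a1, scale=b1)
info_mu = prior_mu.entropy() - post_mu.entropy()      # nats learned about mu

print(f"information about the multiplier lam: {info_lam:.3f} nats")
print(f"information about the moment mu     : {info_mu:.3f} nats")

# Differential entropy is not invariant under the nonlinear map mu = 1/lam,
# so the two information measures differ even though the posterior beliefs
# are the same distribution expressed in different parameterizations.

This toy calculation only illustrates why the choice of quantity on which entropy is measured (moment versus Lagrange multiplier) matters; the article's own examples and numerical results may differ.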