Abstract

An approach to quantifying the amount of information that an experimenter expects to learn from a non-linear model is presented. An expected utility for an experiment, ξ, motivated by the asymptotic form of the Shannon information gain between the prior and the posterior, is defined. This leads to a characterization of the experimenter who expects to learn the most from a non-linear model: such an experimenter has the design-dependent Jeffreys prior. Sufficient regularity conditions for equivalence with the asymptotic Shannon information gain are given. An application to the optimal selection of sample size in a model with exponential family errors, and to the Michaelis-Menten model, is discussed. A link between the regularity conditions for asymptotic posterior normality and the Jeffreys prior is given.
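As a numerical illustration of the design-dependent Jeffreys prior mentioned above, the following sketch evaluates an unnormalised prior density proportional to the square root of the determinant of the Fisher information for the Michaelis-Menten mean function η(x; θ) = θ₁x/(θ₂ + x) under i.i.d. Gaussian errors. The design points, parameter values, and error variance are hypothetical; this is a minimal sketch of the general construction, not the paper's own computation.

```python
import numpy as np

def mm_gradient(x, theta):
    # Gradient of the Michaelis-Menten mean function
    # eta(x; theta) = theta1 * x / (theta2 + x) with respect to theta.
    t1, t2 = theta
    d1 = x / (t2 + x)
    d2 = -t1 * x / (t2 + x) ** 2
    return np.stack([d1, d2], axis=-1)  # shape (n, 2)

def jeffreys_density(theta, design, sigma=1.0):
    # Unnormalised design-dependent Jeffreys prior:
    # p(theta) proportional to sqrt(det F(theta)), where F is the
    # Fisher information of the design under Gaussian errors.
    G = mm_gradient(design, theta)  # (n, 2) matrix of gradients
    F = G.T @ G / sigma**2          # (2, 2) Fisher information
    return np.sqrt(np.linalg.det(F))

# Hypothetical design: five substrate concentrations.
design = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
print(jeffreys_density((2.0, 1.5), design))
```

Because the density depends on the design points, different experiments induce different priors, which is what makes the characterization of the maximally learning experimenter design-dependent.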
