Abstract

Relative entropy minimization has been proposed as an inference method for problems in which information takes the form of constraints on the underlying probability model. We provide a theoretical justification for this procedure through a correspondence principle. In particular, for a convex constraint set Λ, we show that as the number of trials increases, the empirical distribution associated with a discrete probability distribution p, conditioned to lie within Λ, will with high probability become arbitrarily close to the distribution that minimizes the relative entropy between p and Λ.
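
A minimal sketch of this statement follows, with the notation (the empirical distribution \hat{P}_n, the minimizer q^{*}, and the divergence D(\cdot\,\|\,\cdot), taken here in the order standard for conditional limit theorems) introduced for illustration rather than drawn from the paper. Let X_1, \dots, X_n be i.i.d. draws from p and let \hat{P}_n denote their empirical distribution; the correspondence principle then says that, conditioned on \hat{P}_n \in \Lambda, the empirical distribution concentrates on the information projection of p onto \Lambda:

\[
  q^{*} \;=\; \operatorname*{arg\,min}_{q \in \Lambda} D(q \,\|\, p),
  \qquad
  D(q \,\|\, p) \;=\; \sum_{x} q(x)\,\log\frac{q(x)}{p(x)},
\]
\[
  \Pr\!\bigl( \|\hat{P}_n - q^{*}\| > \varepsilon \;\bigm|\; \hat{P}_n \in \Lambda \bigr) \;\longrightarrow\; 0
  \quad \text{as } n \to \infty, \ \text{for every } \varepsilon > 0.
\]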

