Abstract

Maximum entropy inference is a method for inferring an unknown probability distribution from a set of moments of that distribution in such a way that all information contained in the set is maximally utilized. This article presents a maximum entropy interpretation of the decision bound models and the context model of categorization. For the decision bound models, it is shown that several forms of decision bound can be derived as maximum entropy solutions based on relatively limited information on category structure. For the context model, it is shown that a maximum entropy inference model is asymptotically equivalent to the context model under some restrictive conditions on the category exemplar distribution and the similarity parameters of the model. The maximum entropy inference model, however, does not require the storage of exemplars in memory as the context model does. Maximum entropy inference also provides a theoretical justification for the similarity rules of the context model, namely, the multiplicative similarity rule for binary features and the Gaussian–Euclidean and exponential–city-block similarity rules for continuous features.
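The core inference step the abstract describes — recovering a distribution from a fixed set of moments by maximizing entropy — can be sketched for the simplest case: a finite support and a single mean constraint. The function below is an illustrative sketch, not the paper's own formulation; the function name and the bisection solver are assumptions for the example. The maximum-entropy solution under a mean constraint takes the exponential-family form p_i ∝ exp(λx_i), and the Lagrange multiplier λ is found numerically so that the constraint holds.

```python
import math

def maxent_given_mean(values, target_mean, lo=-50.0, hi=50.0, tol=1e-12):
    """Maximum-entropy distribution on a finite support subject to a single
    mean constraint. The solution has the form p_i ∝ exp(lam * x_i); the
    multiplier lam is found by bisection, since the resulting mean is a
    strictly increasing function of lam (its derivative is the variance)."""
    def mean_for(lam):
        weights = [math.exp(lam * v) for v in values]
        total = sum(weights)
        return sum(v * w for v, w in zip(values, weights)) / total

    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid

    lam = (lo + hi) / 2.0
    weights = [math.exp(lam * v) for v in values]
    total = sum(weights)
    return [w / total for w in weights]
```

For a die-like support {1, …, 6} with target mean 3.5, the solver recovers the uniform distribution (λ = 0); shifting the target mean tilts the distribution exponentially toward the larger or smaller values, which is the exponential-family behavior underlying the maximum entropy solutions discussed in the article.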
