Abstract

This chapter introduced key elements that determine RELR's probability estimates, which are consistent with both Bayesian and frequentist views of probability. RELR's Bayesian online learning and memory capabilities were presented as a special case of minimum KL divergence subject to constraints, a formulation that allows prior probabilities to influence new learning. This minimum KL divergence method generalizes RELR's maximum entropy and maximum likelihood methods to situations where prior probabilities are not everywhere equal. RELR's minimum KL divergence online learning has a memory for previous observation episodes that is stored in the prior distribution q, or equivalently in prior parameters. Online learning that retains a memory for past observations is unique to RELR, arising from its error modeling; it is not a property of standard logistic regression. Because of this memory for previous observations in longitudinal data, RELR avoids the need for complex methods with difficult assumptions, such as GEE or fixed- and random-effects modeling, to handle correlated observations. The chapter also introduced RELR's handling of categorical variables as standardized dummy-coded features that code all levels of a categorical variable. This capability, likewise unique to RELR, arises from its error modeling and its ability to overcome multicollinearity, and will prove especially useful with correlated observations. Finally, RELR's feature reduction based on the magnitude of t-values was reviewed, along with its two feature selection methods, Implicit and Explicit RELR.
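
To make the minimum-divergence formulation concrete, the following is a generic sketch of minimum KL divergence estimation subject to moment constraints with a prior distribution q. The constraint functions f_j and targets mu_j are placeholders; RELR's specific error-term constraints are not reproduced here.

```latex
% Generic minimum KL divergence problem with prior q:
\[
\min_{p}\; D_{\mathrm{KL}}(p \,\|\, q) \;=\; \sum_{i} p_i \log \frac{p_i}{q_i}
\quad\text{subject to}\quad
\sum_{i} p_i f_j(x_i) = \mu_j \;\;\forall j,
\qquad \sum_{i} p_i = 1 .
\]
% The solution takes the exponential form
\[
p_i \;\propto\; q_i \exp\!\Big(\textstyle\sum_{j} \lambda_j f_j(x_i)\Big),
\]
% and when q is uniform over N outcomes, D_KL(p||q) = log N - H(p),
% so minimizing the divergence reduces to maximizing the entropy H(p).
```

In online learning, the fitted distribution p (or its parameters) from one observation episode can serve as the prior q for the next, which is how memory for past data is carried forward.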
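
As a concrete illustration of parameter-level memory across episodes, the sketch below implements a generic Bayesian-style online logistic update in which each episode's fitted coefficients become the center of a Gaussian prior for the next episode. This is a minimal stand-in for the idea described above, not RELR's actual estimator; the penalty strength tau and the plain gradient-descent fit are assumptions chosen for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_map(X, y, w_prior, tau=1.0, lr=0.1, n_iter=500):
    """MAP logistic regression with a Gaussian prior centered at w_prior.

    Minimizes mean negative log-likelihood + (tau/2) * ||w - w_prior||^2,
    so w_prior carries memory of earlier observation episodes.
    """
    w = w_prior.copy()
    for _ in range(n_iter):
        p = sigmoid(X @ w)
        # Gradient of the NLL plus the gradient of the Gaussian prior term.
        grad = X.T @ (p - y) / len(y) + tau * (w - w_prior)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
w_true = np.array([1.5, -2.0])
w = np.zeros(2)  # flat starting prior for the first episode

# Each episode's posterior mode becomes the prior center for the next one,
# so information from earlier data is retained without refitting on it.
for episode in range(3):
    X = rng.normal(size=(200, 2))
    y = (rng.random(200) < sigmoid(X @ w_true)).astype(float)
    w = fit_logistic_map(X, y, w_prior=w, tau=0.5)
    print(f"episode {episode}: w = {np.round(w, 3)}")
```

In contrast to this generic sketch, the abstract notes that RELR's memory property arises specifically from its error modeling rather than from an explicit penalty term.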
