Abstract

We use the theory of normal variance-mean mixtures to derive a data-augmentation scheme that unifies a wide class of statistical models under a single framework. This generalizes existing theory on normal variance mixtures for priors in regression and classification. It also allows variants of the expectation-maximization algorithm to be brought to bear on a much wider range of models than previously appreciated. We demonstrate the resulting gains in accuracy and stability on several examples, including sparse quantile regression and binary logistic regression.
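For readers unfamiliar with the term, a normal variance-mean mixture is the standard construction the abstract refers to: conditionally on a latent mixing variable V, the observation is Gaussian with both its mean and its variance depending on V. A minimal sketch in LaTeX notation follows; the symbols \mu, \beta, \sigma and the mixing law \pi are generic placeholders, not notation taken from the paper itself:

  y \mid V \sim \mathcal{N}\!\left(\mu + \beta V,\; \sigma^{2} V\right), \qquad V \sim \pi,
  \qquad\text{so that}\qquad
  p(y) = \int \mathcal{N}\!\left(y \mid \mu + \beta V,\; \sigma^{2} V\right)\, \pi(dV).

Setting \beta = 0 recovers an ordinary normal variance mixture, which is why the abstract describes the variance-mean case as a generalization of existing normal variance-mixture theory.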
