Abstract

The model averaging problem is to average multiple models so as to achieve a prediction accuracy not much worse than that of the best single model, measured in mean-squared error. It is known that when the models are misspecified, model averaging is superior to model selection: if $n$ is the sample size, the worst-case regret of the former decays at a rate of $O(1/n)$, whereas the worst-case regret of the latter decays at a rate of $O(1/\sqrt{n})$. The recently proposed $Q$-aggregation algorithm solves the model averaging problem with the optimal regret of $O(1/n)$ both in expectation and in deviation; however, it suffers from two limitations: 1) for a continuous dictionary, the proposed greedy algorithm for solving $Q$-aggregation is not applicable; and 2) the formulation of $Q$-aggregation appears ad hoc, without clear intuition. This paper examines a different approach to model averaging: a Bayes estimator for deviation-optimal model averaging based on an exponentiated least squares loss. We establish a primal-dual relationship between this estimator and the $Q$-aggregation estimator, and propose new algorithms that satisfactorily resolve the above-mentioned limitations of $Q$-aggregation.
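As a concrete illustration of the setup, the following Python sketch contrasts model selection (committing to the single empirically best model) with exponential-weights model averaging on a toy regression dictionary. The dictionary, noise level, and temperature `beta` are illustrative assumptions; the estimator shown is the classical exponential-weights average, not the paper's $Q$-aggregation or Bayes estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n noisy observations of an unknown regression function,
# plus a small dictionary of M candidate models (fixed prediction vectors).
n, M = 200, 5
truth = np.sin(np.linspace(0, 3, n))
y = truth + 0.3 * rng.standard_normal(n)

# Hypothetical dictionary: each column holds one candidate model's predictions.
F = np.column_stack([truth + 0.1 * k * rng.standard_normal(n)
                     for k in range(1, M + 1)])

# Empirical mean-squared error of each candidate model.
risks = np.mean((y[:, None] - F) ** 2, axis=0)

# Model selection: commit to the single empirically best model.
f_select = F[:, np.argmin(risks)]

# Model averaging via exponential weights: w_j proportional to
# exp(-n * risk_j / (2 * beta)) under a uniform prior. The temperature
# beta is set from the (assumed known) noise level purely for this sketch.
beta = 2 * 0.3 ** 2
log_w = -n * risks / (2 * beta)
w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
w /= w.sum()
f_avg = F @ w

print("selection MSE :", np.mean((f_select - truth) ** 2))
print("averaging MSE :", np.mean((f_avg - truth) ** 2))
```

On runs where several candidates are nearly tied, the averaged predictor tends to be more stable than the selected one, which is the qualitative gap behind the $O(1/n)$ versus $O(1/\sqrt{n})$ regret rates discussed above.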
