Abstract

We consider the following type of online variance minimization problem: in every trial t our algorithms get a covariance matrix $\boldsymbol{C}^{t}$ and try to select a parameter vector $\boldsymbol{w}^{t-1}$ such that the total variance over a sequence of trials $\sum_{t=1}^{T} (\boldsymbol {w}^{t-1})^{\top} \boldsymbol {C}^{t}\boldsymbol {w}^{t-1}$ is not much larger than the total variance of the best parameter vector $\boldsymbol{u}$ chosen in hindsight. Two parameter spaces in $\mathbb{R}^n$ are considered: the probability simplex and the unit sphere. The first space is associated with the problem of minimizing risk in stock portfolios, and the second space leads to an online calculation of the eigenvector with minimum eigenvalue of the total covariance matrix $\sum_{t=1}^{T} \boldsymbol {C}^{t}$. For the first parameter space we apply the Exponentiated Gradient algorithm, which is motivated by a relative entropy regularization. In the second case, the algorithm has to maintain uncertainty information over all unit directions $\boldsymbol{u}$. For this purpose, directions are represented as dyads $\boldsymbol{u}\boldsymbol{u}^{\top}$ and the uncertainty over all directions as a mixture of dyads, which is a density matrix. The motivating divergence for density matrices is the quantum version of the relative entropy, and the resulting algorithm is a special case of the Matrix Exponentiated Gradient algorithm. In each of the two cases we prove bounds on the additional total variance incurred by the online algorithm over the best offline parameter.
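To make the two settings concrete, the following is a minimal sketch (not the paper's exact algorithms or tuning) of multiplicative updates of this flavor: an Exponentiated Gradient step on the probability simplex and a Matrix Exponentiated Gradient step on density matrices. The learning rate `eta` and the toy stream of random covariance matrices are illustrative assumptions; numpy and scipy are assumed available.

```python
import numpy as np
from scipy.linalg import expm, logm

def eg_update(w, C, eta=0.1):
    """One Exponentiated Gradient step on the probability simplex.

    The trial loss is w^T C w, whose gradient is 2 C w; each weight is
    multiplied by the exponentiated negative gradient and renormalized.
    """
    grad = 2.0 * C @ w
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()

def meg_update(W, C, eta=0.1):
    """One Matrix Exponentiated Gradient step on density matrices.

    W is a symmetric positive-definite matrix with unit trace (a mixture
    of dyads u u^T); the update exponentiates log W minus eta times the
    covariance matrix and renormalizes the trace back to one.
    """
    W_new = expm(logm(W) - eta * C)
    return W_new / np.trace(W_new)

# Toy run on a stream of random covariance matrices (hypothetical data).
rng = np.random.default_rng(0)
n, T = 4, 50
w = np.ones(n) / n      # uniform start on the simplex
W = np.eye(n) / n       # maximally mixed density matrix
total_var_simplex = total_var_density = 0.0
for t in range(T):
    A = rng.standard_normal((n, n))
    C = A @ A.T / n     # random positive semidefinite covariance matrix
    total_var_simplex += w @ C @ w       # variance incurred before seeing C
    total_var_density += np.trace(W @ C) # tr(W C) generalizes u^T C u
    w = eg_update(w, C)
    W = meg_update(W, C)
print(total_var_simplex, total_var_density)
```

The accumulated quantities play the role of the online algorithm's total variance, which the paper's bounds compare against the best fixed parameter in hindsight.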
