Real-time resource usage prediction is an important part of resource provisioning in a cloud data centre. As cloud workloads vary dynamically, effective resource provisioning requires predicting future resource usage trends. The problem is complicated by the highly time-varying nature of cloud workloads: training a prediction model once, on a fixed set of observations, is not sufficient to capture this variability. In this work, we propose gradient descent (GD) and Levenberg-Marquardt (LM) adaptation algorithms for dynamic adaptation of resource utilization prediction models. We also propose a novel sparse framework for fast online adaptation of resource usage prediction models, and analyze different algorithms for introducing sparsity: ℓ₁ regularization, ℓ₂ regularization, optimal brain damage (OBD), and optimal brain surgeon (OBS). The proposed sparse framework for online adaptation of multivariate resource usage prediction models is validated on CPU usage prediction for the Google cluster trace and the PlanetLab workload trace. A comparative analysis of the different sparse frameworks shows that the OBD-based LM adaptation algorithm outperforms the others for online multivariate resource usage prediction in a cloud.
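To make the idea of sparse online adaptation concrete, the sketch below shows one of the simplest variants named above: online gradient-descent adaptation of a linear autoregressive CPU-usage predictor with an ℓ₁ proximal (soft-thresholding) step. This is a minimal illustration under assumed settings (a linear predictor, a fixed lag window, hand-picked learning rate and regularization strength), not the paper's actual model or its LM/OBD variants.

```python
import numpy as np

def soft_threshold(w, t):
    # Proximal operator of the l1 norm: shrinks every weight toward zero
    # and zeroes out small coefficients, which introduces sparsity.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def online_l1_gd(samples, n_lags=3, lr=0.05, lam=0.01):
    """Adapt a linear autoregressive usage predictor online.

    At each step: predict the next usage value from the last `n_lags`
    observations, then take one gradient-descent step on the squared
    prediction error, followed by an l1 proximal (soft-threshold) step.
    Returns the final weights and the mean squared prediction error.
    """
    w = np.zeros(n_lags)
    sq_errors = []
    for i in range(n_lags, len(samples)):
        x = samples[i - n_lags:i]          # recent usage window
        y = samples[i]                     # next observed usage
        err = w @ x - y                    # prediction error
        sq_errors.append(err ** 2)
        w -= lr * err * x                  # gradient step on squared error
        w = soft_threshold(w, lr * lam)    # l1 proximal step -> sparsity
    return w, float(np.mean(sq_errors))
```

In this setup the model is re-adapted at every new observation rather than trained once, which is the point the abstract makes about time-varying workloads; swapping the gradient step for an LM update, or the ℓ₁ step for OBD/OBS pruning, changes only the adaptation rule, not the online structure.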