Current rain removal techniques for surveillance videos mainly assume consistent rain with invariant extent and type, and are implemented in a batch-mode learning manner. Such an assumption deviates from the continuously varying characteristics of practical rain, and the batch mode further makes these techniques infeasible for the long-lasting videos encountered in practice. To alleviate these issues, this study proposes a novel online rain removal approach that represents the practical dynamic rain embedded in surveillance videos. Specifically, we model the rain streaks scattered in each video frame as a patch-wise mixture of Gaussians (P-MoG) distribution, and update its parameters frame by frame. Such a P-MoG modeling manner finely reflects the non-i.i.d. dynamic variation of rain along time. In particular, the P-MoG rain model in each frame is regularized by the rain knowledge learned in previous frames, making the online model adaptable to the not-identically-distributed rain in each frame while being regularized by the not-independently-distributed rain across previous frames. The proposed model is formulated as a concise probabilistic MAP model that can be readily solved by the EM algorithm. We further embed an affine transformation operator into the proposed model, making it adaptable to a wider range of videos with camera jitter. The superiority of the proposed method, in both accuracy and efficiency, is substantiated by extensive experiments on synthetic and real videos containing static and dynamic rains, in comparison with state-of-the-art methods.
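To make the online modeling idea concrete, the following is a minimal sketch, not the authors' implementation: it fits a patch-wise mixture of Gaussians to the rain residual of each incoming frame with a few EM iterations, then blends the new parameters with those from previous frames via an exponential-forgetting weight that stands in for the paper's regularization by previously learned rain knowledge. The diagonal-covariance simplification and all names (extract_patches, online_pmog_update, K, rho, n_em_iters) are assumptions introduced here for illustration.

```python
# Illustrative sketch only: online EM for a patch-wise MoG (P-MoG) on the
# rain residual of each frame. Diagonal covariances and the forgetting
# weight rho are simplifying assumptions, not the paper's exact model.
import numpy as np

def extract_patches(residual, patch_size=4):
    """Split a 2-D rain residual into non-overlapping flattened patches."""
    h, w = residual.shape
    h_c, w_c = h - h % patch_size, w - w % patch_size
    return (residual[:h_c, :w_c]
            .reshape(h_c // patch_size, patch_size, w_c // patch_size, patch_size)
            .transpose(0, 2, 1, 3)
            .reshape(-1, patch_size * patch_size))

def online_pmog_update(patches, state, K=3, rho=0.9, n_em_iters=5, eps=1e-6):
    """EM on the current frame's patches, regularized by previous frames."""
    N, D = patches.shape
    if state is None:  # initialize from the first frame
        idx = np.random.choice(N, K, replace=False)
        state = {"pi": np.full(K, 1.0 / K),
                 "mu": patches[idx].copy(),
                 "var": np.full((K, D), patches.var() + eps)}
    pi, mu, var = state["pi"], state["mu"], state["var"]
    # local parameters start from the previous frame's estimates
    pi_f, mu_f, var_f = pi.copy(), mu.copy(), var.copy()
    for _ in range(n_em_iters):
        # E-step: responsibilities under diagonal-covariance Gaussians
        log_p = (-0.5 * (((patches[:, None, :] - mu_f) ** 2) / var_f
                         + np.log(2 * np.pi * var_f)).sum(-1) + np.log(pi_f))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        Nk = r.sum(axis=0) + eps
        # M-step: maximum-likelihood updates on the current frame only
        pi_f = Nk / N
        mu_f = (r.T @ patches) / Nk[:, None]
        var_f = (r.T @ patches ** 2) / Nk[:, None] - mu_f ** 2 + eps
    # Exponential forgetting: knowledge from previous frames regularizes
    # the new estimate, mimicking the online, non-i.i.d. adaptation.
    pi = rho * pi + (1 - rho) * pi_f
    mu = rho * mu + (1 - rho) * mu_f
    var = rho * var + (1 - rho) * var_f
    return {"pi": pi / pi.sum(), "mu": mu, "var": var}

# Usage sketch: feed frames one at a time.
# state = None
# for frame in video_frames:                  # grayscale float arrays
#     residual = frame - background_estimate  # rough rain-layer estimate
#     state = online_pmog_update(extract_patches(residual), state)
```

In this sketch, a larger rho leans more heavily on the rain statistics accumulated from previous frames, while a smaller rho lets the mixture adapt faster to abrupt changes in rain extent or type; the background estimate and any alignment for camera jitter are assumed to come from elsewhere in the pipeline.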