With the increasing popularity of smartphone cameras and wearable cameras, it is imperative to develop robust vision systems capable of analyzing videos captured by these freely moving cameras. In this paper, we propose a novel motion- and appearance-based algorithm for foreground/background segmentation of such videos. Unlike existing methods, it requires no prior information and places no restrictions on camera motion or scene geometry. The proposed algorithm first estimates a dense motion field between two consecutive frames and obtains a motion-based foreground probability estimate for each pixel by comparing the motion field with its low-rank approximation. In parallel, color features are extracted by sliding a fixed-size neighborhood window over the entire image. Using the motion-based probability estimates, highly probable foreground and background color features are identified and used to learn foreground and background appearance models. These models then generate an appearance-based probability estimate for each pixel. To overcome inaccuracies in appearance modeling and background motion approximation, we incorporate an innovative Mega-pixel denoising process that uses color segmentation to smooth the probability estimates. Finally, the denoised probability estimates are combined with the image gradient map to produce the output foreground mask under the Graph-Cut optimization framework. To cope with non-stationary dynamic scenes, the foreground and background appearance models are continuously updated with highly probable foreground and background color features. Extensive evaluations on publicly available test sequences show that the proposed technique outperforms six state-of-the-art algorithms.
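For illustration, the following Python sketch shows one plausible realization of the motion-based probability step described above: approximate the dense flow field by a low-rank matrix and treat per-pixel residuals as evidence of foreground motion. The function name, the rank choice, the side-by-side arrangement of the flow components, and the Gaussian residual-to-probability mapping are all assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def motion_foreground_probability(flow, rank=3, sigma=0.5):
    """Hypothetical sketch of the motion-based probability step.

    flow : (H, W, 2) dense motion field between two consecutive frames.
    rank : assumed rank of the dominant (background) motion component.
    sigma: scale of the residual-to-probability mapping (a guess).
    """
    H, W, _ = flow.shape
    # Arrange the horizontal and vertical flow components side by side;
    # a smooth camera-induced background field is approximately low-rank
    # when viewed as a matrix over the pixel grid (an assumption).
    M = np.concatenate([flow[..., 0], flow[..., 1]], axis=1)  # (H, 2W)
    # Low-rank approximation via truncated SVD.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[rank:] = 0.0                      # keep only the dominant modes
    M_lr = (U * s) @ Vt                 # approximate background motion
    residual = M - M_lr
    # Per-pixel residual magnitude over both flow components.
    r = np.sqrt(residual[:, :W] ** 2 + residual[:, W:] ** 2)
    # Map residuals to [0, 1); larger deviation from the low-rank
    # background model yields higher foreground probability.
    return 1.0 - np.exp(-(r ** 2) / (2.0 * sigma ** 2))
```

In this reading, pixels whose motion is well explained by the low-rank approximation (the camera-induced background field) receive probabilities near zero, while independently moving objects produce large residuals and probabilities near one; these estimates would then seed the foreground and background appearance models.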