Abstract

Human Activity Recognition (HAR) aims to recognise and interpret human activities from videos, and it comprises background subtraction, feature extraction and classification stages. Among these stages, background subtraction is essential for achieving a high recognition rate when analysing videos. The proposed Fusion-based Gaussian Mixture Model (FGMM) background subtraction algorithm extracts the foreground from videos while remaining invariant to illumination changes, shadows and dynamic backgrounds. The FGMM algorithm consists of three stages: background detection, colour similarity calculation and colour distortion calculation. The Jeffries-Matusita distance measure is used to check whether the current pixel matches a Gaussian distribution, and this value drives the background model update. A weighted Euclidean colour similarity measure eliminates shadows, and a colour distortion measure handles illumination variations. The extracted foreground is binarised, with the foreground's interest points stored as white pixels in the frame so that they can be extracted easily. The algorithm was evaluated on test sets drawn from publicly available benchmark data sets, namely the KTH, Weizmann, PETS and Change Detection data sets. Experimental results show that the proposed FGMM achieves better foreground-detection accuracy than prevailing approaches.
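The three per-pixel tests named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact method: the Bhattacharyya-based form of the Jeffries-Matusita distance, the particular channel weights in the weighted Euclidean similarity, the colour-distortion formula (here, distance from the pixel to its brightness-scaled projection onto the background colour) and all thresholds are assumptions made for the sketch.

```python
import numpy as np

def jm_distance(pixel, mean, var):
    """Jeffries-Matusita-style match score between a pixel and a Gaussian
    component, via a Bhattacharyya term B = ||x - mu||^2 / (8 * var).
    (Assumed form; the paper's exact definition may differ.)"""
    b = np.sum((pixel - mean) ** 2) / (8.0 * var)
    return 2.0 * (1.0 - np.exp(-b))

def weighted_colour_similarity(pixel, background, weights=(0.299, 0.587, 0.114)):
    """Weighted Euclidean distance in RGB; a small value suggests the pixel
    differs from the background mainly in intensity (a shadow candidate).
    The luminance-style weights are an illustrative choice."""
    w = np.asarray(weights)
    return np.sqrt(np.sum(w * (pixel - background) ** 2))

def colour_distortion(pixel, background):
    """Distance from the pixel to its brightness-scaled projection onto the
    background colour vector; low distortion indicates an illumination
    change rather than a new object."""
    denom = np.dot(background, background)
    alpha = np.dot(pixel, background) / denom if denom > 0 else 0.0
    return np.linalg.norm(pixel - alpha * background)

def classify_pixel(pixel, mean, var,
                   jm_thresh=0.8, sim_thresh=30.0, cd_thresh=15.0):
    """Return 255 (foreground) or 0 (background, shadow, or illumination
    change). Thresholds are assumed values for illustration only."""
    pixel = np.asarray(pixel, dtype=float)
    mean = np.asarray(mean, dtype=float)
    if jm_distance(pixel, mean, var) < jm_thresh:
        return 0    # matches a background Gaussian component
    if weighted_colour_similarity(pixel, mean) < sim_thresh:
        return 0    # shadow: chroma close to background, only darker
    if colour_distortion(pixel, mean) < cd_thresh:
        return 0    # illumination change along the background colour vector
    return 255      # genuine foreground, stored as a white pixel
```

Applying `classify_pixel` over every pixel of a frame (against the per-pixel Gaussian means of the model) yields the binarised foreground mask described above, with foreground pixels set to white.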
