Abstract

The low-rank representation (LRR) method has recently gained enormous popularity due to its robust approach to solving the subspace segmentation problem, particularly for corrupted data. In this paper, the recursive sample scaling low-rank representation (RSS-LRR) method is proposed. The advantage of RSS-LRR over traditional LRR is that it further introduces a cosine scaling factor, which imposes a penalty on each sample to better minimize the influence of noise and outliers. Specifically, the cosine scaling factor is a similarity measure learned to capture each sample’s relationship with the principal components of the low-rank representation in the feature space. In other words, the smaller the angle between an individual data sample and the low-rank representation’s principal components, the more likely it is that the sample is clean. The proposed method can therefore obtain a good low-rank representation influenced mainly by clean data. Several experiments are performed with varying levels of corruption on the ORL, CMU PIE, COIL20, COIL100, and LFW datasets to evaluate RSS-LRR’s effectiveness against state-of-the-art low-rank methods. The experimental results show that RSS-LRR consistently outperforms the compared methods in image clustering and classification tasks.
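The cosine scaling idea in the abstract can be illustrated with a short sketch: weight each sample by the cosine of the angle between it and the principal subspace of a low-rank representation. This is only an illustration of the concept under assumed notation (`X` as a column-sample data matrix, `L` as a low-rank component, a top-`k` SVD subspace); it is not the paper's actual formulation or algorithm.

```python
import numpy as np

def cosine_sample_weights(X, L, k=2):
    """Illustrative cosine scaling factor (a sketch, not the paper's method).

    X : (d, n) data matrix, one sample per column.
    L : (d, n) low-rank representation of the data.
    k : number of principal components of L to keep.

    Returns an (n,) vector of cosines: values near 1 indicate a small
    angle to the principal subspace, i.e. a likely clean sample.
    """
    # Principal components of the low-rank representation via SVD.
    U, _, _ = np.linalg.svd(L, full_matrices=False)
    P = U[:, :k]                                   # (d, k) principal directions

    # Project each sample onto the principal subspace.
    proj = P @ (P.T @ X)                           # (d, n)

    # Cosine of the angle between each sample and its projection.
    num = np.sum(X * proj, axis=0)
    den = np.linalg.norm(X, axis=0) * np.linalg.norm(proj, axis=0) + 1e-12
    return num / den
```

In this sketch, samples lying inside the principal subspace receive a weight of 1, while noisy samples pointing away from it receive smaller weights, which matches the intuition that a smaller angle signals a cleaner sample.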

Highlights

  • While RSS-LRR’s performance of 0.6052 on the COIL100 dataset corrupted with 6 × 6 block occlusion (Table 8) is only slightly better, by over 1%, than that of the second-best GODEC+, it is better by over 2% under 8 × 8 block occlusion

  • Similar results are obtained on the Labeled Faces in the Wild (LFW) dataset (Table 9), where the proposed method’s performance is better than that of GODEC+, which follows closely, with a larger margin of over 4% obtained under 6 × 6 block occlusion

  • Its performance on COIL100 under 0% noise is merely 1% better than that of its closest competitor, GODEC+, but it is over 2% better under 20% noise. The same can be said of the LFW dataset, where the clustering accuracy of recursive sample scaling low-rank representation (RSS-LRR) is only about 2% better than that of LRR on clean data, whereas it is more than 4% better than that of GODEC+, the closest result, under the 20% noise level


Summary

Introduction

This experiment is performed using surveillance video with various illumination settings. It is composed of a sequence of 200 grayscale frames of 32 × 32 dimensions. Thus, each algorithm’s effectiveness is evaluated using precision, recall, and F-score metrics, and their parameters are tuned according to the corresponding literature. Background modeling [44] is measured by manually marking out the activities. In this experiment, 50% of the frames are randomly selected as the training set, while the remaining frames are treated as the testing set
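The precision, recall, and F-score metrics mentioned above can be computed for binary foreground masks as follows. This is a standard sketch of the metrics, not code from the paper; the mask format (boolean arrays with foreground as True) is an assumption.

```python
import numpy as np

def precision_recall_fscore(pred, gt):
    """Precision, recall, and F-score for binary foreground masks.

    pred : predicted foreground mask (array-like of 0/1 or bool).
    gt   : ground-truth foreground mask of the same shape.
    """
    pred = np.asarray(pred).astype(bool).ravel()
    gt = np.asarray(gt).astype(bool).ravel()

    tp = np.sum(pred & gt)                         # true foreground hits
    precision = tp / max(pred.sum(), 1)            # fraction of detections that are correct
    recall = tp / max(gt.sum(), 1)                 # fraction of true foreground recovered
    fscore = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, fscore
```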

