Abstract
Sparse representation plays an important role in face recognition research. As a deformable-sample classification task, face recognition is often used to test the performance of classification algorithms. In face recognition, differences in expression, angle, posture, and lighting conditions are key factors that affect recognition accuracy. Different image samples of the same face may differ significantly, which makes image classification very difficult. How to build a robust virtual image representation therefore becomes a vital issue. To address these problems, this paper proposes a novel image classification algorithm. First, to better retain the global features and contour information of the original sample, the algorithm uses an improved non-linear image representation method that highlights the low-intensity and high-intensity pixels of the original training sample, thus generating a virtual sample. Second, following the principle of sparse representation, the linear expression coefficients of the original samples and of the virtual samples are computed separately. With these two sets of coefficients, the distance between the test sample and its reconstruction from the original samples, and the distance between the test sample and its reconstruction from the virtual samples, are calculated and converted into distance scores. Finally, a simple and effective weighted fusion scheme combines the classification scores of the original image and the virtual image, and the fused score determines the final classification result. Experimental results show that the proposed method outperforms other typical sparse representation classification methods.
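The sketch below illustrates the overall pipeline described above: a non-linear transform generates a virtual sample, per-class reconstruction residuals are computed for both the original and virtual dictionaries, and the two distance scores are fused with a weight. The abstract does not give the exact non-linear transform, the solver used for the representation coefficients, or the fusion weight, so the `gamma` emphasis function, the l2-regularized least-squares step, and the weight `w` here are illustrative assumptions rather than the authors' method.

```python
import numpy as np

def virtual_sample(x, gamma=2.0):
    """Hypothetical non-linear transform that emphasizes low- and
    high-intensity pixels; the paper's exact formula is not in the abstract."""
    x = x.astype(np.float64) / 255.0
    # Symmetric emphasis of dark and bright pixels (illustrative choice only).
    return 0.5 * (x ** gamma + 1.0 - (1.0 - x) ** gamma)

def representation_coefficients(D, y, lam=1e-2):
    """Linear expression coefficients of the test sample y over the
    dictionary D (columns = training samples), using an l2-regularized
    least-squares stand-in for the sparse coding step."""
    A = D.T @ D + lam * np.eye(D.shape[1])
    return np.linalg.solve(A, D.T @ y)

def class_distances(D, labels, coeff, y):
    """Per-class reconstruction residuals (distance scores)."""
    scores = {}
    for c in np.unique(labels):
        mask = labels == c
        scores[c] = np.linalg.norm(y - D[:, mask] @ coeff[mask])
    return scores

def fuse_and_classify(D_orig, D_virt, labels, y_orig, y_virt, w=0.5):
    """Weighted fusion of the original-image and virtual-image scores;
    the fusion weight w is a free parameter in this sketch."""
    c_o = representation_coefficients(D_orig, y_orig)
    c_v = representation_coefficients(D_virt, y_virt)
    s_o = class_distances(D_orig, labels, c_o, y_orig)
    s_v = class_distances(D_virt, labels, c_v, y_virt)
    fused = {c: w * s_o[c] + (1.0 - w) * s_v[c] for c in s_o}
    # Assign the class with the smallest fused distance score.
    return min(fused, key=fused.get)
```

In use, `D_virt` would be built by applying `virtual_sample` column-wise to the original training matrix, and `y_virt` by applying it to the test image; the smallest fused residual decides the label, mirroring the weighted score fusion described in the abstract.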