Abstract

A self-organizing map (SOM) is a classical neural network method for dimensionality reduction that belongs to the unsupervised learning class. A SOM is trained with unsupervised learning to produce a low-dimensional, discretized representation of the input space of the training samples, called a map, and it uses a neighborhood function to preserve the topological properties of the input space. A SOM operates in two modes: training and mapping. Training builds the map from the input examples; this competitive process is also called vector quantization. Mapping then assigns a new input vector to a location on the trained map. In this paper, we first survey related dimensionality reduction methods and then examine their capabilities for face recognition. Different dimensionality reduction techniques, namely principal component analysis (PCA), independent component analysis (ICA), and the self-organizing map (SOM), are applied in order to reduce the loss of classification performance due to changes in facial expression. Experiments conducted on the ORL face database show that SOM performs better than the other techniques.
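To make the two modes concrete, the following is a minimal NumPy sketch of SOM training and mapping. It is illustrative only: the grid size, learning-rate and neighborhood-radius schedules, and function names are assumptions, not details taken from the paper.

import numpy as np

def train_som(X, grid=(10, 10), epochs=100, lr0=0.5, sigma0=3.0, seed=0):
    """Train a SOM on data X of shape (n_samples, n_features). Illustrative sketch."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    # One weight vector per map node, initialized randomly.
    W = rng.random((rows, cols, X.shape[1]))
    # Grid coordinates used by the neighborhood function.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    n_steps = epochs * len(X)
    t = 0
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            # Decay the learning rate and neighborhood radius over time.
            lr = lr0 * np.exp(-t / n_steps)
            sigma = sigma0 * np.exp(-t / n_steps)
            # Best-matching unit (BMU): the node whose weights are closest to x.
            d = np.linalg.norm(W - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood centred on the BMU preserves topology.
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
            # Move every node's weights toward x, scaled by the neighborhood.
            W += lr * g[..., None] * (x - W)
            t += 1
    return W

def map_sample(W, x):
    """Mapping mode: return the grid position of the BMU for a new sample."""
    return np.unravel_index(np.argmin(np.linalg.norm(W - x, axis=-1)), W.shape[:2])

In the face recognition setting, each training vector would be a (possibly pre-processed) face image flattened into a feature vector, and the trained map positions serve as the reduced representation.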

Highlights

  • Biometrics refers to the study of methods for uniquely recognizing humans based on one or more intrinsic physical or behavioral characteristics

  • Biometrics identifies an input sample by comparing it against a stored template, for example to identify specific people by certain characteristics

  • Face recognition can benefit areas such as airport security, access control, driver’s licenses, passports, homeland defense, and customs and immigration. Face recognition has been a research area for almost 30 years, with significantly increased research activity since 1990, which has resulted in successful algorithms and the introduction of commercial products


Summary

INTRODUCTION

Biometrics refers to the study of methods for uniquely recognizing humans based on one or more intrinsic physical or behavioral characteristics. Face recognition has been a research area for almost 30 years, with significantly increased research activity since 1990, which has resulted in successful algorithms and the introduction of commercial products. Suppose we have a collection of n-dimensional real vectors drawn from an unknown probability distribution; in most practical cases the dimension n is very large. This motivates dimensionality reduction methods, which represent the data in a lower-dimensional space. A principal component, for example, can be defined as a linear combination of optimally weighted observed variables.
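To illustrate this definition, the following is a minimal NumPy sketch of PCA in which each principal component is a weight vector over the observed variables and the projected scores are the corresponding linear combinations of the centred data. The function name and parameters are assumptions for illustration, not the paper's implementation.

import numpy as np

def pca(X, k):
    """Project X (n_samples, n_features) onto its top-k principal components."""
    X_centered = X - X.mean(axis=0)            # remove the mean of each variable
    cov = np.cov(X_centered, rowvar=False)     # covariance matrix of the features
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigendecomposition (symmetric matrix)
    order = np.argsort(eigvals)[::-1][:k]      # top-k directions by explained variance
    components = eigvecs[:, order]             # each column: optimal weights of one PC
    return X_centered @ components             # scores = weighted combinations of variables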

PRINCIPAL COMPONENT ANALYSIS
Classifier
INDEPENDENT COMPONENT ANALYSIS
SELF-ORGANIZING MAP
TRAINING AND TEST DATA
EXPERIMENTATION
Method
CONCLUSION
