Abstract

In this paper, feature extraction methods, one approach to reducing the dimensionality of high-dimensional data, are empirically compared. We selected the traditional PCA (Principal Component Analysis), ICA (Independent Component Analysis), NMF (Non-negative Matrix Factorization), and sNMF (Sparse NMF) for comparison. ICA is known to yield features resembling those of simple cells in the primary visual cortex (V1), NMF implements a parts-based representation as hypothesized for the brain, and sNMF is an improved version of NMF. To visually inspect the extracted features, feature extraction was performed on handwritten digit images; to evaluate the effect on recognition, the extracted features were used to train multi-layer perceptrons. The resulting comparison of each method's characteristics will be useful when selecting a feature extraction method according to the kind of features a given application requires.
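The comparison pipeline described above (extract features, then train a multi-layer perceptron on them) can be sketched as follows. This is an illustrative reconstruction, not the paper's original code: it uses scikit-learn's built-in 8×8 digits dataset rather than the paper's handwritten-digit images, shows only PCA and NMF, and the component count and network size are arbitrary choices (ICA via `FastICA` and sparsity-regularized NMF follow the same pattern).

```python
# Sketch of the abstract's pipeline: feature extraction followed by MLP recognition.
# Assumptions (not from the paper): scikit-learn digits data, 30 components,
# a single 64-unit hidden layer.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA, NMF
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixels to [0, 1]; NMF requires non-negative input
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

results = {}
for name, extractor in [
    ("PCA", PCA(n_components=30, random_state=0)),
    ("NMF", NMF(n_components=30, init="nndsvda", max_iter=500, random_state=0)),
]:
    F_train = extractor.fit_transform(X_train)  # learn basis, project training set
    F_test = extractor.transform(X_test)        # project test set onto same basis
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(F_train, y_train)
    results[name] = clf.score(F_test, y_test)

for name, acc in results.items():
    print(f"{name}: test accuracy = {acc:.3f}")
```

Inspecting `extractor.components_` (reshaped to 8×8) visualizes the learned features: PCA bases are global and signed, while NMF bases tend toward localized, non-negative parts, matching the contrast the paper draws.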
