Abstract

In pattern recognition, feature extraction plays an important role prior to classification: it filters out background noise, reduces the dimensionality of the input, and so on. Fisher Linear Discriminant Analysis (FLDA) is one of the best-known feature extraction methods. In recent years, FLDA has been improved in various ways, either to learn an eigenspace faster or to improve classification performance. Simple-FLDA (SFLDA) was proposed to speed up learning by improving the FLDA algorithm. However, these methods operate in the input space, so they may be inefficient when the data distribution is complex. Simple Kernel Discriminant Analysis (SKDA), an improved version of Kernel Discriminant Analysis (KDA), was therefore proposed to achieve better classification performance by applying the kernel trick. Although SKDA achieves better performance than SFLDA, its learning time increases. In this paper, an additional improvement is applied to the SKDA algorithm, and the resulting improved version of SKDA (SIKDA) is introduced. SIKDA achieves the same classification performance as SKDA while learning faster. Experiments confirm these results; in particular, the effect of the proposed method is clearly visible on one specific dataset.
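For context on the baseline the abstract builds on, the following is a minimal sketch of standard two-class Fisher Linear Discriminant Analysis in NumPy, not the paper's SFLDA, SKDA, or SIKDA algorithms. The function name `fisher_direction` and the toy data are illustrative assumptions; the formula w = Sw⁻¹(m₁ − m₀) is the classical Fisher criterion solution.

```python
import numpy as np

def fisher_direction(X, y):
    """Classical two-class Fisher discriminant direction: w = Sw^{-1} (m1 - m0).

    Illustrative sketch only; not the SFLDA/SKDA/SIKDA algorithms of the paper.
    """
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter matrix (sum of per-class scatter)
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

# Toy data (assumed for illustration): two Gaussian blobs separated along axis 0
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
               rng.normal([4.0, 0.0], 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w = fisher_direction(X, y)
# Projecting onto w reduces the data to one discriminative dimension
z = X @ w
```

Because this projection is linear in the input space, it can fail on complexly distributed data; the kernel trick used by KDA/SKDA addresses this by computing the same kind of discriminant in an implicit high-dimensional feature space.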
