Abstract
This paper presents an innovative approach to multiclass classification. One-against-one neural networks are applied to interval neutrosophic sets (INS). An INS associates truth, indeterminacy, and false membership values with an output. Multiple pairs of binary neural networks, each pair consisting of a truth network and a falsity network, are trained to predict pairs of truth and false membership values. The difference between each pair of truth and false membership values is interpreted as vagueness in the classification and forms the indeterminacy membership value. The three memberships obtained from each pair of networks constitute an interval neutrosophic set. Multiple interval neutrosophic sets are then created and used to support decision making in multiclass classification. We have applied our technique to three classical benchmark problems, balance, wine, and yeast, from the UCI machine learning repository. Our approach improves classification performance compared to an existing one-against-one technique that uses only the truth membership values.
Highlights
Vagueness is normally expected in real-world problems.
We found that using two opposite neural networks can improve classification performance compared to the existing technique, which deals only with the truth membership values.
The outputs from each pair of networks are represented as a vague output containing the truth membership, indeterminacy membership, and false membership values.
Summary
Vagueness is normally expected in real-world problems. For instance, one cannot exactly define how many grains of sand constitute a heap. The truth, indeterminacy, and false membership values are used to deal with this vagueness. Applying both truth and falsity networks can also increase diversity in neural network ensembles, thereby improving performance. The figure shows the proposed component, which consists of a set of input feature vectors, a pair of opposite neural networks (Truth NN and Falsity NN), vagueness estimation, the three memberships, and a vague output. In each component, both binary neural networks are trained with the same training data from two classes. After k(k-1)/2 vague outputs are created, a majority vote is applied to classify the input feature vector into one of the multiple classes. The class with the highest number of votes, with ties broken by the minimum average vagueness value, is chosen.
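The decision rule above can be sketched in code. This is a minimal illustration, not the authors' implementation: the function names and the specific vagueness formula (indeterminacy taken as the disagreement between the truth network and the complement of the falsity network) are assumptions made for the sketch.

```python
# Hedged sketch of the one-against-one decision rule over vague outputs.
# Assumed convention: the falsity network is trained on complemented
# targets, so (1 - false) should agree with truth when there is no vagueness.

def vague_output(truth, false):
    """Combine one pair of network outputs into (truth, indeterminacy, false).

    Indeterminacy (vagueness) is the disagreement between the truth
    network and the complement of the falsity network.
    """
    indeterminacy = abs(truth - (1.0 - false))
    return truth, indeterminacy, false

def classify(pair_outputs, classes):
    """Majority vote over all k(k-1)/2 class pairs.

    pair_outputs maps a class pair (i, j) to the (truth, false) outputs
    of the binary networks trained to separate class i from class j.
    Ties are broken by the minimum average vagueness value.
    """
    votes = {c: 0 for c in classes}
    vagueness = {c: [] for c in classes}
    for (i, j), (truth, false) in pair_outputs.items():
        t, ind, f = vague_output(truth, false)
        winner = i if t >= 0.5 else j   # class i wins the pair if truth >= 0.5
        votes[winner] += 1
        vagueness[winner].append(ind)
    # Sort by highest vote count, then by lowest mean vagueness.
    def key(c):
        avg = sum(vagueness[c]) / len(vagueness[c]) if vagueness[c] else 1.0
        return (-votes[c], avg)
    return min(classes, key=key)
```

For example, with three classes and pairwise outputs `{(0, 1): (0.9, 0.1), (0, 2): (0.8, 0.15), (1, 2): (0.6, 0.5)}`, class 0 wins two of its pairs and is selected.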