Abstract

Two questions often arise in ensemble-based multiclass classification: (i) how to combine base classifiers and (ii) how to design the underlying binary classifiers. Error-correcting output codes (ECOC) methods answer both questions, but they have focused only on the overall goodness of the classifier. The main purpose of our research was to strengthen the bottleneck of the ensemble method, i.e., to minimize the largest values of two types of error ratios in a deep neural network-based classifier. The research was both theoretical and experimental: the proposed Min–Max ECOC method provides a theoretically proven optimal solution, which was verified by experiments on image datasets. The optimal solution is based on maximizing the lowest value in the Hamming matrix derived from the ECOC matrix. The largest ECOC matrix, the so-called full matrix, is always a Min–Max ECOC matrix, but smaller matrices generally do not reach the optimal Hamming distance value, so a recursive construction algorithm is proposed to approach it. Since optimal values are hard to compute for large ECOC matrices, an interval with upper and lower bounds was established and proved in two theorems. Convolutional neural networks with the Min–Max ECOC matrix were tested on four real datasets and compared with OVA (one-versus-all) and ECOC variants in terms of known indicators and two new ones. The experimental results show that the suggested method surpasses the others; thus our method is a promising addition to the ensemble learning literature.
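The quantity the abstract optimizes, the smallest pairwise Hamming distance between class codewords of an ECOC matrix, can be sketched as follows. This is an illustrative computation only, not the authors' construction algorithm; the example matrix is a standard full (exhaustive) code for four classes, used here as an assumed instance.

```python
import numpy as np

def min_pairwise_hamming(ecoc):
    """Return the minimum pairwise Hamming distance between class codewords.

    ecoc: (n_classes, n_binary_learners) matrix with entries in {-1, +1};
    each row is the codeword assigned to one class.
    """
    ecoc = np.asarray(ecoc)
    n = ecoc.shape[0]
    # Hamming distance between two rows = number of positions where they differ.
    dists = [
        int(np.sum(ecoc[i] != ecoc[j]))
        for i in range(n)
        for j in range(i + 1, n)
    ]
    return min(dists)

# Full ECOC matrix for 4 classes: 2^(4-1) - 1 = 7 binary problems (illustrative).
full = np.array([
    [ 1,  1,  1,  1,  1,  1,  1],
    [-1, -1, -1, -1,  1,  1,  1],
    [-1, -1,  1,  1, -1, -1,  1],
    [-1,  1, -1,  1, -1,  1, -1],
])
print(min_pairwise_hamming(full))  # → 4
```

For this full matrix every pair of rows differs in exactly 4 of the 7 positions, so the minimum distance already equals its maximum possible value; a Min–Max matrix with fewer columns would try to keep this minimum as large as possible.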
