The subject matter of the article is the application of supervised machine learning to the task of object class recognition. The goal is to enhance the functional efficiency of information-extreme technology (IET) for object class recognition. The tasks to be solved are: to analyse possible ways of increasing the functional efficiency of the IET approach; to implement an ensemble of models that includes logistic regression for prioritizing recognition features and an IEI learning algorithm; to compare the functional efficiency of the resulting ensemble of models on a well-known dataset with the classic approach and with the results of other researchers. The methods: the method is developed within the framework of the functional approach to modelling natural intelligence, applied to the problem of object classification. The following results were obtained: the study augments the existing IET to support feature prioritization as part of the object class recognition algorithm. The classical information-extreme algorithm treats all input features as equally important when forming the decision rule. As a result, strongly correlated object features are not prioritized by the algorithm's decision mechanism, which decreases functional efficiency in the exam mode. The proposed approach solves this problem in two stages. In the first stage, multiclass logistic regression is applied to the input training feature vectors of the objects to be classified, forming the normalized training matrix. To prevent overfitting of the logistic regression model, the L2 (ridge) regularization method was used. In the second stage, the information-extreme method takes the result of the first stage as input. The geometrical parameters of the class containers and the control tolerances on the recognition features were considered as the optimization parameters. Conclusions. The proposed approach increases classification accuracy on the MNIST (Modified National Institute of Standards and Technology) dataset by 26.44% compared with the classic information-extreme method. The proposed approach has 3.77% lower accuracy than neural-like approaches, but it uses fewer resources in the training phase and allows retraining the model, and also allows expanding the dictionary of recognition classes without retraining the model.
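
To make the first (feature-prioritization) stage more concrete, the sketch below applies L2-regularized multiclass logistic regression to training feature vectors and uses the fitted coefficients to weight and normalize the features. This is a minimal illustration under stated assumptions: scikit-learn's digits dataset stands in for MNIST, and forming the "normalized training matrix" by rescaling each feature with its mean absolute regression coefficient is only one plausible reading of the abstract; the paper's exact construction and the second, information-extreme stage (optimization of class-container geometry and control tolerances) are not reproduced here.

    # Minimal sketch of stage one (assumptions noted above).
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import MinMaxScaler

    # Small MNIST-like dataset as a stand-in for the real benchmark.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Stage 1: multiclass logistic regression with L2 (ridge) regularization.
    # C is the inverse regularization strength; smaller C means stronger regularization.
    stage1 = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
    stage1.fit(X_train, y_train)

    # Illustrative choice: prioritize each feature by its mean absolute coefficient across
    # classes, then rescale the weighted feature vectors to [0, 1] to form the matrix that
    # would be handed to the information-extreme learning stage.
    feature_weights = np.abs(stage1.coef_).mean(axis=0)   # shape: (n_features,)
    scaler = MinMaxScaler()
    train_matrix = scaler.fit_transform(X_train * feature_weights)
    test_matrix = scaler.transform(X_test * feature_weights)

    print(train_matrix.shape)  # (n_train_samples, n_features) -- input to stage 2

In this reading, stage two would then optimize the class-container parameters and control tolerances on train_matrix rather than on the raw features, which is how the regression-based prioritization influences the decision rules.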