Image classification is one of the most fundamental tasks in computer vision and continues to attract enormous interest. People can recognize a large number of objects in images with little effort, even when many of the objects' characteristics vary, and even when the objects are partially obscured. At the same time, an algorithmic description of the recognition task suitable for computer implementation remains an open problem. Existing methods are effective only for certain classes of objects (for example, geometric shapes, human faces, road signs, printed or handwritten characters) and only under certain conditions. A model designed to identify and classify objects must be able to determine their location and to distinguish various object features, such as edges, corners, and color differences.

Deep convolutional neural networks have demonstrated the best performance on image tasks, sometimes exceeding the capabilities of human vision. Even with this significant improvement, however, they still suffer from overfitting and vanishing gradients. Well-known techniques such as data augmentation, batch normalization, and dropout are used to mitigate these issues; at the same time, modern classification models typically operate directly on the original RGB images without converting them to another color space.

Studying the use of different color spaces in image classification is therefore a topical problem in deep learning, and it determines the relevance of this study: solving it would improve the performance of the models used.
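As a minimal sketch of the color space conversion discussed above, the following example converts a single RGB pixel to the HSV color space using Python's standard `colorsys` module; the pixel values are hypothetical, and a real preprocessing pipeline would apply such a conversion to every pixel of every image before training.

```python
import colorsys

# Hypothetical RGB pixel, normalized to [0, 1] (e.g. an 8-bit value / 255).
r, g, b = 0.2, 0.6, 0.4

# Convert the pixel from RGB to HSV; hue, saturation, and value
# are each returned as floats in [0, 1].
h, s, v = colorsys.rgb_to_hsv(r, g, b)

print(h, s, v)
```

In practice, such per-pixel conversions are vectorized (for example, with OpenCV or NumPy) rather than applied in a Python loop, but the underlying transformation is the same.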