In image analysis, effectively handling large image datasets is a complex challenge that typically calls for deep neural networks. Siamese neural networks, known for their twin-branch structure, offer an effective solution to image comparison tasks, especially when the volume of training data is limited. This research explores the possibility of enhancing these models by adding supplementary outputs that improve classification and help extract specific data features. The article presents the results of two experiments on the Fashion MNIST and PlantVillage datasets, incorporating supplementary classification, regression, and combined output strategies under various loss weight configurations. The experiments show that for simpler datasets, introducing supplementary outputs leads to a decrease in model accuracy, whereas for more complex datasets, the best accuracy was achieved by integrating regression and classification supplementary outputs simultaneously. It should be noted, however, that the observed increase in accuracy is marginal and does not guarantee a substantial impact on the overall accuracy of the model.
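As a rough illustration of the approach described above, the sketch below shows a Siamese model with a main similarity output plus supplementary classification and regression heads, combined through per-output loss weights. It uses Keras; the layer sizes, the loss weight values, and the choice of attaching the auxiliary heads to one branch are illustrative assumptions, not the exact configuration evaluated in the experiments.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model


def build_encoder(input_shape=(28, 28, 1), embedding_dim=64):
    """Shared convolutional encoder used by both Siamese branches."""
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(embedding_dim, activation="relu")(x)
    return Model(inputs, x, name="encoder")


def build_siamese(input_shape=(28, 28, 1), num_classes=10):
    encoder = build_encoder(input_shape)
    input_a = layers.Input(shape=input_shape, name="image_a")
    input_b = layers.Input(shape=input_shape, name="image_b")
    emb_a = encoder(input_a)
    emb_b = encoder(input_b)

    # Main output: similarity score from the L1 distance between embeddings.
    diff = layers.Subtract()([emb_a, emb_b])
    distance = layers.Lambda(tf.abs)(diff)
    similarity = layers.Dense(1, activation="sigmoid", name="similarity")(distance)

    # Supplementary classification head (predicts the class of the first image).
    aux_class = layers.Dense(num_classes, activation="softmax", name="aux_class")(emb_a)

    # Supplementary regression head (predicts a scalar attribute of the first image).
    aux_reg = layers.Dense(1, activation="linear", name="aux_reg")(emb_a)

    model = Model([input_a, input_b], [similarity, aux_class, aux_reg])
    # The loss weights control how strongly each supplementary output
    # influences training; the values below are placeholders.
    model.compile(
        optimizer="adam",
        loss={
            "similarity": "binary_crossentropy",
            "aux_class": "sparse_categorical_crossentropy",
            "aux_reg": "mse",
        },
        loss_weights={"similarity": 1.0, "aux_class": 0.5, "aux_reg": 0.5},
        metrics={"similarity": "accuracy"},
    )
    return model


if __name__ == "__main__":
    model = build_siamese()
    model.summary()
```

Dropping the `aux_class` or `aux_reg` head (or setting its loss weight to zero) recovers a plain Siamese comparison model, which is the baseline against which the supplementary outputs are evaluated.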