Neural networks have inductive biases arising from the assumptions embedded in the chosen learning algorithm, dataset, and network architecture. In particular, convolutional neural networks (CNNs) are known to exhibit a texture bias, which is closely related to image classification accuracy. Aligning a model’s bias with a dataset’s bias can significantly enhance transfer-learning performance, enabling more efficient learning. This study quantitatively demonstrates that increasing a network’s shape bias by varying kernel sizes and dilation rates improves accuracy on shape-dominant data and enables efficient learning from less data. Furthermore, we propose a novel method for quantitatively evaluating the balance between texture bias and shape bias; this method enables efficient learning by aligning the biases of the transfer-learning dataset with those of the model. Systematically adjusting these biases allows CNNs to better fit data with specific biases, yielding an accuracy improvement of up to 9.9% over the original model. Our findings underscore the critical role of bias adjustment in CNN design and contribute to the development of more efficient and effective image classification models.
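The abstract does not detail the architecture, but the core idea of enlarging a convolution’s receptive field via kernel size and dilation rate can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration, not the authors’ implementation: `make_conv` is a hypothetical helper, and the effective-extent formula `dilation * (kernel_size - 1) + 1` is the standard relation for dilated convolutions.

```python
import torch
import torch.nn as nn


def make_conv(in_ch: int, out_ch: int,
              kernel_size: int = 3, dilation: int = 1) -> nn.Conv2d:
    """Hypothetical helper: a conv layer whose receptive field grows with
    kernel_size and dilation (not from the paper; for illustration only).

    Effective kernel extent = dilation * (kernel_size - 1) + 1.
    Padding is chosen so that spatial resolution is preserved (stride 1, odd kernels).
    """
    padding = dilation * (kernel_size - 1) // 2
    return nn.Conv2d(in_ch, out_ch, kernel_size,
                     padding=padding, dilation=dilation)


# Larger effective receptive fields let a layer aggregate global shape
# structure rather than local texture statistics -- the intuition behind
# tuning these hyperparameters to shift texture/shape bias.
x = torch.randn(1, 16, 32, 32)
for k, d in [(3, 1), (5, 1), (3, 2), (7, 3)]:
    conv = make_conv(16, 16, kernel_size=k, dilation=d)
    eff = d * (k - 1) + 1  # effective kernel extent
    print(f"kernel={k}, dilation={d}, effective extent={eff}, "
          f"out shape={tuple(conv(x).shape)}")
```

Under these assumptions, a 3×3 kernel with dilation 2 covers the same 5×5 extent as a dense 5×5 kernel while using fewer parameters, which is one plausible way such adjustments could be swept when tuning the texture/shape balance.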