ResNet is one of the leading neural networks widely applied to image classification. This study builds a simple baseline network based on the concept of ResNet and then examines how variations in ResNet's architecture affect model performance, providing insights for optimizing network design. First, this study investigates the number of fully connected layers. The results show that reducing the number of fully connected layers significantly decreases the total number of trainable parameters, which in turn shortens training time; however, this reduction does not yield a noticeable improvement in accuracy after convergence. Conversely, increasing the number of fully connected layers not only greatly increases training time but also leads to overfitting on the CIFAR-100 dataset, slightly degrading performance. Second, this study analyzes the influence of reducing the number of residual basic blocks. The analysis suggests that using fewer residual blocks significantly harms both accuracy and training time, likely because residual blocks strengthen the network's ability to learn features from the CIFAR-100 dataset. Finally, this study explores the effect of a larger convolution kernel size within the residual basic block. The outcome demonstrates that increasing the kernel size in the residual blocks significantly improves both training time and accuracy. Additionally, in this variant experiment convergence had not yet been clearly established, leaving open the possibility that accuracy might continue to improve with more training epochs.
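To make the three architectural variants concrete, the sketch below shows one plausible shape for such a configurable baseline, assuming a PyTorch implementation. The class names and the num_blocks, num_fc_layers, and kernel_size parameters are illustrative stand-ins, not the study's actual code; each experiment described above corresponds to varying one of these arguments.

```python
import torch
import torch.nn as nn


class BasicBlock(nn.Module):
    """Residual basic block with a configurable convolution kernel size."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2  # keep the spatial size unchanged
        self.conv1 = nn.Conv2d(channels, channels, kernel_size,
                               padding=padding, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size,
                               padding=padding, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut


class Baseline(nn.Module):
    """Small ResNet-style baseline; each experiment varies one argument."""

    def __init__(self, num_blocks: int = 3, num_fc_layers: int = 1,
                 kernel_size: int = 3, num_classes: int = 100):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True))
        # Experiment 2 varies num_blocks; experiment 3 varies kernel_size.
        self.blocks = nn.Sequential(
            *[BasicBlock(64, kernel_size) for _ in range(num_blocks)])
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Experiment 1 varies num_fc_layers in the classifier head.
        fc = []
        for _ in range(num_fc_layers - 1):  # optional hidden FC layers
            fc += [nn.Linear(64, 64), nn.ReLU(inplace=True)]
        fc.append(nn.Linear(64, num_classes))
        self.classifier = nn.Sequential(*fc)

    def forward(self, x):
        x = self.blocks(self.stem(x))
        x = self.pool(x).flatten(1)
        return self.classifier(x)


# Example: the larger-kernel variant from the third experiment.
model = Baseline(kernel_size=5)
logits = model(torch.randn(2, 3, 32, 32))  # CIFAR-100 images are 32x32
```

Parameterizing the baseline this way keeps every run identical except for the single architectural factor under study, which is what allows the differences in training time and accuracy to be attributed to that factor.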