Accurately classifying the quality of apples is crucial for maximizing their commercial value. Deep learning techniques are being widely adopted for apple quality classification tasks and have achieved impressive results. While existing research excels at classifying apple variety, size, shape, and defects, color and deformity analysis remains under-explored. This study therefore investigates the feasibility of using convolutional neural networks (CNNs) to classify the color and deformity of apples based on machine vision technology. First, a custom-assembled machine vision system was constructed to collect apple images. Image processing was then performed to select, from the 45 images taken of each apple, the image showing the largest fruit diameter, thereby establishing an image dataset. Three classic CNN models (AlexNet, GoogLeNet, and VGG16) were employed, with parameter optimization, for a three-category classification task (non-deformed slice–red apple, non-deformed stripe–red apple, and deformed apple) based on apple features. VGG16 achieved the best accuracy at 92.29%, while AlexNet and GoogLeNet achieved 91.66% and 88.96%, respectively. Ablation experiments on the VGG16 model showed that each convolutional block contributed to the classification task. Finally, VGG16 was used to predict the classes of 150 apples, achieving an accuracy of 90.50%, comparable to or better than other existing models. This study provides insights into apple classification based on color and deformity using deep learning methods.