Objective: To develop a multi-classification orthodontic image recognition system based on the SqueezeNet deep learning model for the automatic classification of orthodontic image data.

Methods: A total of 35 000 clinical orthodontic images were collected at the Department of Orthodontics, Capital Medical University School of Stomatology, from October to November 2020 and from June to July 2021. The images came from 490 orthodontic patients with a male-to-female ratio of 49∶51 and an age range of 4 to 45 years. After data cleaning based on the inclusion and exclusion criteria, the final dataset included 17 453 facial images (frontal, smiling, 90° right, 90° left, 45° right, and 45° left), 8 026 intraoral images [frontal occlusion, right occlusion, left occlusion, upper occlusal view (original and flipped), lower occlusal view (original and flipped), and coverage of the occlusal relationship], 4 115 X-ray images [lateral skull X-ray from the left side, lateral skull X-ray from the right side, frontal skull X-ray, cone-beam CT (CBCT), and wrist bone X-ray], and 684 other, non-orthodontic images. A labeling team composed of orthodontic doctoral students, associate professors, and professors used image labeling tools to classify the orthodontic images into 20 categories: 6 facial image categories, 8 intraoral image categories, 5 X-ray image categories, and 1 category of other images. The data for each label were randomly divided into training, validation, and test sets at an 8∶1∶1 ratio using the random function of the Python programming language. An improved SqueezeNet deep learning model was used for training, and 13 000 natural images from the open-source ImageNet dataset were added as non-orthodontic images to optimize the algorithm's handling of anomalous data. On this basis, a multi-classification orthodontic image recognition system based on deep learning models was constructed. Classification accuracy was evaluated on the test set using precision, recall, the F1 score, and the confusion matrix, and the soundness of the model's classification logic was verified with heat maps generated by the gradient-weighted class activation mapping (Grad-CAM) method.

Results: After data cleaning and labeling, 30 278 orthodontic images were included in the dataset. On the test set, the precision, recall, and F1 scores of most classification labels were 100%; only 5 of 3 047 images were misclassified, giving a system accuracy of 99.84% (3 042/3 047). The precision of anomaly data processing was 100% (10 500/10 500). The heat maps showed that the evidence used by the SqueezeNet deep learning model to classify images was largely consistent with human judgment.

Conclusions: This study developed a multi-classification orthodontic image recognition system that automatically classifies 20 types of orthodontic images based on an improved SqueezeNet deep learning model. The system exhibited good accuracy in orthodontic image classification.
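
The per-label 8∶1∶1 split described in the Methods can be illustrated with a minimal Python sketch. This is not the authors' released code; the dictionary structure, function name, and fixed random seed are assumptions made for illustration, with only the Python random module and the 8∶1∶1 ratio taken from the abstract.

```python
import random

def split_by_label(images_by_label, seed=42, ratios=(0.8, 0.1, 0.1)):
    """Randomly split each label's images into training, validation, and test sets (8:1:1)."""
    random.seed(seed)  # the seed value is an assumption; the abstract does not specify one
    train, val, test = {}, {}, {}
    for label, paths in images_by_label.items():
        paths = list(paths)
        random.shuffle(paths)                      # Python's random module, as stated in the Methods
        n_train = int(len(paths) * ratios[0])
        n_val = int(len(paths) * ratios[1])
        train[label] = paths[:n_train]
        val[label] = paths[n_train:n_train + n_val]
        test[label] = paths[n_train + n_val:]      # remainder forms the test set
    return train, val, test

# Hypothetical usage with two of the 20 categories named in the study
if __name__ == "__main__":
    demo = {
        "frontal_face": [f"face_{i}.jpg" for i in range(100)],
        "frontal_occlusion": [f"intraoral_{i}.jpg" for i in range(50)],
    }
    tr, va, te = split_by_label(demo)
    print({k: (len(tr[k]), len(va[k]), len(te[k])) for k in demo})
```

Splitting within each label, rather than over the pooled dataset, keeps the 8∶1∶1 proportion for every one of the 20 categories, which matters here because the category sizes range from several hundred to many thousands of images.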