• Damage classification of milled rice grains is demonstrated using fine-tuned deep CNN models.
• A high-magnification dataset of 8048 images covering seven types of rice grain damage is constructed.
• Five state-of-the-art CNN models (EfficientNet-B0, ResNet-50, InceptionV3, MobileNetV2, MobileNetV3) are used for damage classification.
• EfficientNet-B0 is the best-performing CNN model, with an overall classification accuracy of 98.32%.
• EfficientNet-B0 successfully classifies chalky damaged rice further into three subclasses.

Surface quality evaluation of pre-processed rice grains is a key factor in determining their market acceptance, storage stability, processing quality, and overall customer approval. On the one hand, conventional methods of surface quality evaluation are time-intensive, subjective, and inconsistent. On the other hand, current automated methods are limited either to sorting healthy rice grains from damaged ones, without classifying the latter, or to segregating different types of rice. A detailed classification of damage in milled rice grains has remained largely unexplored, owing to the lack of an extensive labelled image dataset and of advanced CNN models applied to it, which enable quick, accurate, and precise classification by excelling at end-to-end tasks, minimizing pre-processing, and eliminating the need for manual feature extraction. In this study, a machine vision system is developed to first construct a dataset of 8048 high-magnification (4.5×) images of damaged rice grains, obtained through on-field collection. The dataset spans seven damage classes, namely healthy, full chalky, chalky discolored, half chalky, broken, discolored, and normal damage. Subsequently, five state-of-the-art, memory-efficient deep CNN models, namely EfficientNet-B0, ResNet-50, InceptionV3, MobileNetV2, and MobileNetV3, are adopted and fine-tuned to enable damage classification of milled rice grains.
Experimental results show that EfficientNet-B0 is the best-performing model in terms of accuracy, average recall, precision, and F1-score. It achieves individual class accuracies of 98.33%, 96.51%, 95.45%, 100%, 100%, 99.26%, and 98.72% for the healthy, full chalky, chalky discolored, half chalky, broken, discolored, and normal damage classes, respectively. The EfficientNet-B0 architecture achieves an overall classification accuracy of 98.37% with a significantly reduced model size (47 MB) and a short prediction time of 0.122 s, and it can further sub-classify the chalky class into three subclasses, i.e., full chalky, half chalky, and chalky discolored. Overall, this study demonstrates that deep CNN architectures applied to a high-magnification image dataset enable classification of damaged rice grains with high accuracy, which could serve as a tool for better and more objective quality assessment of damaged rice grains at market and trading locations.