Purpose
This study aims to identify an optimized deep learning model for differentiating non-traumatic brachial plexopathy on routine MRI scans.

Materials and methods
This retrospective study identified patients at Mayo Clinic through electronic medical records (EMR) or pathology reports who underwent brachial plexus (BP) MRI between January 2002 and December 2022. Using sagittal T1, fluid-sensitive, and post-gadolinium images, a radiology panel selected the BP region of interest (ROI) to form three-dimensional volumes for this study. We designed six deep learning schemes to differentiate BP abnormalities across the three MRI sequences. Using five well-established deep learning networks as backbones, we trained and validated these models with nested five-fold cross-validation. Furthermore, we defined a 'method score', derived from radar charts, as a quantitative indicator to guide selection of the best model.

Results
This study selected 196 patients from an initial 267 candidates. A total of 256 BP MRI series were compiled from them, comprising 123 normal and 133 abnormal series. The abnormal series included four sub-categories: breast cancer (22.5 %), lymphoma (27.1 %), inflammatory conditions (33.1 %), and others (17.2 %). The best-performing model was produced by the feature merging mode with the triple-MRI joint strategy (AUC, 92.2 %; accuracy, 89.5 %), exceeding the multiple-channel merging mode (AUC, 89.6 %; accuracy, 89.0 %), the solo-channel volume mode (AUC, 89.2 %; accuracy, 86.7 %), and the remaining schemes. Evaluated by the method score (maximum 2.37), the feature merging mode with a VGG16 backbone yielded the highest score of 1.75 under the triple-MRI joint strategy.

Conclusion
Deployment of deep learning models across sagittal T1, fluid-sensitive, and post-gadolinium MRI sequences demonstrated strong potential for brachial plexopathy diagnosis. Our findings indicate that the feature merging mode with a multi-MRI joint strategy may offer a more satisfactory deep learning model for BP abnormalities than solo-sequence analysis.
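For illustration, the sketch below shows one way the feature merging fusion could be realized in PyTorch: each of the three sequence volumes (T1, fluid-sensitive, post-gadolinium) is passed through its own encoder, standing in for the VGG16-style backbone, and the resulting feature vectors are concatenated before a shared classification head. The small 3D CNN, layer sizes, input shapes, and class names are illustrative assumptions, not the authors' exact implementation.

# Minimal sketch of a feature-merging classifier for three MRI sequence volumes.
# The SequenceEncoder is a placeholder for the VGG16-style backbone used in the study.
import torch
import torch.nn as nn

class SequenceEncoder(nn.Module):
    """Encodes one MRI sequence ROI volume into a fixed-length feature vector."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),            # global pooling -> (B, 32, 1, 1, 1)
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x).flatten(1))

class FeatureMergingClassifier(nn.Module):
    """Feature-merging mode: one encoder per sequence; features are concatenated
    and passed to a single classification head (normal vs. abnormal)."""
    def __init__(self, feat_dim: int = 128, n_classes: int = 2):
        super().__init__()
        self.encoders = nn.ModuleList([SequenceEncoder(feat_dim) for _ in range(3)])
        self.head = nn.Linear(3 * feat_dim, n_classes)

    def forward(self, t1, fluid, gad):
        feats = [enc(vol) for enc, vol in zip(self.encoders, (t1, fluid, gad))]
        return self.head(torch.cat(feats, dim=1))

if __name__ == "__main__":
    model = FeatureMergingClassifier()
    # Dummy ROI volumes: batch of 2, 1 channel, 16 x 64 x 64 voxels (shape is a placeholder).
    t1 = torch.randn(2, 1, 16, 64, 64)
    fluid = torch.randn(2, 1, 16, 64, 64)
    gad = torch.randn(2, 1, 16, 64, 64)
    logits = model(t1, fluid, gad)              # shape (2, 2)
    print(logits.shape)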