Objectives
In orthognathic surgery, one of the primary determinants of reliable three-dimensional virtual surgery planning (3D VSP) and an accurate transfer of the 3D VSP to the patient in the operating room is condylar seating. Incorrectly seated condyles primarily affect the accuracy of maxillary-first bimaxillary osteotomies, as the maxillary repositioning depends on the position of the mandible in the cone-beam computed tomography (CBCT) scan. This study aimed to develop and validate, as a proof of concept, a novel tool that uses a deep learning algorithm to automatically evaluate condylar seating on CBCT images.

Materials and methods
As a reference, 60 CBCT scans (120 condyles) were labeled. The automatic assessment of condylar seating comprised three main parts: a segmentation module, ray-casting, and a feed-forward neural network (FFNN). The AI-based algorithm was trained and tested using fivefold cross-validation. The method's performance was evaluated by comparing the labeled ground truth with the model predictions on the validation dataset.

Results
The model achieved an accuracy of 0.80, a positive predictive value of 0.61, a negative predictive value of 0.90, and an F1-score of 0.71. The sensitivity and specificity of the model were 0.86 and 0.78, respectively. The mean AUC over all folds was 0.87.

Conclusion
The integration of multi-step segmentation, ray-casting, and an FFNN proved to be a viable approach for automating condylar seating assessment and yielded encouraging results.

Clinical relevance
Automated condylar seating assessment using deep learning may improve orthognathic surgery by preventing errors and enhancing patient outcomes in maxillary-first bimaxillary osteotomies.
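
To illustrate how the modules described in the Materials and methods might fit together, the following is a minimal sketch of the classification step: a fixed-length vector of condyle-to-fossa gap distances, obtained by ray-casting between the segmented condyle and fossa surfaces, is fed into a small feed-forward network. The class name, layer sizes, and the 100-ray feature length are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (assumed architecture): a small FFNN that classifies a condyle
# as correctly or incorrectly seated from ray-cast condyle-to-fossa gap distances.
import torch
import torch.nn as nn

class CondylarSeatingFFNN(nn.Module):
    def __init__(self, n_rays: int = 100):  # n_rays is an assumed feature length
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_rays, 64),
            nn.ReLU(),
            nn.Linear(64, 16),
            nn.ReLU(),
            nn.Linear(16, 1),  # single logit for the seating classification
        )

    def forward(self, gap_distances: torch.Tensor) -> torch.Tensor:
        return self.net(gap_distances)

# Example: 120 condyles, each described by 100 ray-cast gap distances (e.g., in mm).
model = CondylarSeatingFFNN(n_rays=100)
distances = torch.rand(120, 100)
seating_probability = torch.sigmoid(model(distances))
```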
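The metrics reported in the Results (accuracy, positive and negative predictive value, sensitivity, specificity, F1-score, and AUC) can be computed per fold and averaged over the fivefold cross-validation roughly as in the sketch below. This assumes scikit-learn, binary ground-truth labels, and a 0.5 decision threshold; the helper names and the commented usage are hypothetical.

```python
# Illustrative sketch (assumed evaluation code): per-fold metrics for a binary
# condylar-seating classifier, to be averaged over fivefold cross-validation.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold

def fold_metrics(y_true, y_score, threshold=0.5):
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "ppv": tp / (tp + fp),        # positive predictive value
        "npv": tn / (tn + fn),        # negative predictive value
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "f1": f1_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_score),
    }

# Hypothetical usage with labeled condyles X, y and a model trained per fold:
# skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
# per_fold = [fold_metrics(y[val], train_and_score(X, y, train, val))
#             for train, val in skf.split(X, y)]
# mean_auc = np.mean([m["auc"] for m in per_fold])
```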