Since developing AI procedures demands significant computing resources and time, careful experimental design is essential. The purpose of this study was to investigate factors influencing the development of AI in orthodontics. A total of 162 AI models were developed, covering all combinations of sample size (170, 340, 679), number of input variables (40, 80, 160), number of output variables (38, 76, 154), number of training sessions (100, 500, 1000), and computer specification (new vs. old). The TabNet deep-learning algorithm was used to develop these AI models, and leave-one-out cross-validation was applied during training. The goodness-of-fit of the regression models was compared using adjusted coefficient of determination values, and the best-fit model was selected accordingly. Multiple linear regression analyses were employed to investigate the relationships between the influencing factors and computational time. Increasing the number of training sessions enhanced the effectiveness of the AI models. The best-fit regression model for predicting the computational time of AI development, which included logarithmic transformations of the time, sample size, and training-session variables, showed an adjusted coefficient of determination of 0.99. These results suggest that the time required for AI development may be estimated by applying logarithmic transformations to the time, sample size, and training-session variables, using coefficients estimated from several pilot studies with reduced sample sizes and fewer training sessions.
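The estimation strategy described above can be sketched as a log-log linear regression: if computational time scales as a power law in sample size n and training sessions s, then log(time) is linear in log(n) and log(s), and the coefficients can be fitted from a few cheap pilot runs and extrapolated to the full-scale configuration. The sketch below is illustrative only (not the authors' code); the pilot timings are synthetic, generated from an assumed power law so the recovered coefficients can be checked.

```python
# Sketch: estimating full-scale AI training time from pilot runs via a
# log-log linear model, time ~ c * n^a * s^b (n = sample size,
# s = training sessions). Hypothetical example, not the study's code.
import math

# Hypothetical pilot measurements (sample_size, sessions, seconds);
# times follow time = 0.002 * n^1.0 * s^1.1 exactly for demonstration.
pilots = [(170, 100, 0.002 * 170 * 100 ** 1.1),
          (340, 100, 0.002 * 340 * 100 ** 1.1),
          (170, 500, 0.002 * 170 * 500 ** 1.1),
          (340, 500, 0.002 * 340 * 500 ** 1.1)]

# Design matrix X = [1, log n, log s]; response y = log time.
X = [[1.0, math.log(n), math.log(s)] for n, s, _ in pilots]
y = [math.log(t) for _, _, t in pilots]

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    k = len(m)
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(k):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [a - f * c for a, c in zip(m[r], m[i])]
    return [m[i][-1] / m[i][i] for i in range(k)]

# Ordinary least squares via the normal equations (X^T X) beta = X^T y.
XtX = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(3)]
       for i in range(3)]
Xty = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(3)]
b0, b1, b2 = solve(XtX, Xty)

# Extrapolate to a full-scale run: n = 679 samples, s = 1000 sessions.
predicted_seconds = math.exp(b0 + b1 * math.log(679) + b2 * math.log(1000))
```

With real pilot timings the fit would not be exact, so the adjusted coefficient of determination (as in the study) indicates how trustworthy the extrapolation is.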