Abstract

Background: A multi-method, multi-informant approach is crucial for evaluating attention-deficit/hyperactivity disorder (ADHD) in preschool children because of the diagnostic complexities and challenges at this developmental stage. However, most artificial intelligence (AI) studies on the automated detection of ADHD have relied on a single data type. This study aims to develop a reliable multimodal AI detection system to facilitate the diagnosis of ADHD in young children.

Methods: Seventy-eight young children were recruited, including 43 diagnosed with ADHD (mean age: 68.07 ± 6.19 months) and 35 with typical development (TD; mean age: 67.40 ± 5.44 months). Machine learning and deep learning methods were used to develop three individual predictive models from electroencephalography (EEG) data recorded with a wearable wireless device, scores from the computerized attention assessment with the Conners’ Kiddie Continuous Performance Test Second Edition (K-CPT-2), and ratings from ADHD-related symptom scales. These three models were then combined into a single ensemble model.

Results: The ensemble model achieved an accuracy of 0.974, whereas the best single-modality models achieved accuracies of 0.909, 0.922, and 0.950 using the ADHD-related symptom rating scales, the K-CPT-2 scores, and the EEG measures, respectively. Moreover, the findings suggest that teacher ratings, K-CPT-2 reaction time, and occipital high-frequency EEG band power are important features for identifying young children with ADHD.

Conclusions: This study addresses three common issues in ADHD-related AI research: the utility of wearable technologies, the integration of data from diverse ADHD diagnostic instruments, and the appropriate interpretation of the models. The established multimodal system is potentially reliable and practical for distinguishing children with ADHD from those with TD and may thus facilitate the clinical diagnosis of ADHD in preschool children.
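The Methods describe fusing three modality-specific classifiers (symptom ratings, K-CPT-2 scores, EEG features) into one ensemble. The abstract does not specify the learners, feature dimensions, or fusion rule, so the sketch below is an illustrative assumption only: it trains one probabilistic classifier per modality on synthetic data and averages their predicted probabilities (soft voting) to show the general pattern of multimodal ensembling.

```python
# Minimal sketch of a multimodal ensemble: three modality-specific classifiers
# whose predicted ADHD probabilities are averaged (soft voting).
# Learners, feature widths, and the fusion rule are illustrative assumptions,
# not the configuration reported in the study; the data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 78                                        # sample size matching the study
y = rng.integers(0, 2, n)                     # 1 = ADHD, 0 = typical development
X_scale = rng.normal(size=(n, 18))            # symptom-rating items (hypothetical width)
X_kcpt = rng.normal(size=(n, 12))             # K-CPT-2 summary scores (hypothetical width)
X_eeg = rng.normal(size=(n, 95))              # EEG band-power features (hypothetical width)

# One classifier per modality; any probabilistic learner fits this pattern.
models = [
    (LogisticRegression(max_iter=1000), X_scale),
    (RandomForestClassifier(n_estimators=200, random_state=0), X_kcpt),
    (SVC(probability=True, random_state=0), X_eeg),
]

probs = []
for clf, X in models:
    clf.fit(X, y)                             # in practice: cross-validated training
    probs.append(clf.predict_proba(X)[:, 1])  # P(ADHD) from each modality

ensemble_prob = np.mean(probs, axis=0)        # simple average as the fusion rule
ensemble_pred = (ensemble_prob >= 0.5).astype(int)
```

In practice the per-modality probabilities could also be combined with a learned meta-classifier (stacking) or weighted by each modality's validation accuracy; simple averaging is used here only to keep the example self-contained.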