Background: This study assesses the reliability of ChatGPT as a source of information on asthma, given the increasing use of AI-driven models for medical information. Prior concerns about misinformation on atopic diseases across digital platforms underscore the importance of this evaluation.

Objective: We aimed to evaluate the scientific reliability of ChatGPT as a source of information on asthma.

Methods: We analyzed ChatGPT's responses to 26 asthma-related questions, each followed by a follow-up question. The questions covered definition/risk factors, diagnosis, treatment, lifestyle factors, and specific clinical inquiries. Medical professionals specializing in allergic and respiratory diseases independently rated the responses on a 1-5 accuracy scale.

Results: Approximately 81% of the responses scored 4 or higher, indicating a generally high level of accuracy. However, 5 responses scored 3 or lower, reflecting minor, potentially harmful inaccuracies. The overall median score was 4. The Fleiss multi-rater kappa indicated moderate agreement among raters.

Conclusion: ChatGPT generally provides reliable asthma-related information, but limitations were noted, including a lack of depth in some responses and an inability to cite sources or update in real time. While it shows promise as an educational tool, it should not substitute for professional medical advice. Future studies should explore its applicability across different user demographics and compare it with newer AI models.