Abstract

Introduction
Dyspareunia, or female sexual pain, has an estimated prevalence of up to 18% worldwide. It is a complex condition with various physical and psychological etiologies, making it challenging for patients to understand. Physicians spend significant time educating patients and providing them with educational materials. ChatGPT is an artificial intelligence (AI) language model that generates human-like conversational text.

Objective
The aim of our study was to evaluate the understandability, quality, and readability of written patient education materials on topics related to dyspareunia generated by ChatGPT in English.

Methods
Five existing and accessible patient education handouts related to dyspareunia were identified and used as reference material. We prompted ChatGPT to create patient education leaflets on dyspareunia, female sexual dysfunction, vaginismus, and vulvodynia. The DISCERN instrument and the Patient Education Materials Assessment Tool (PEMAT) were used to assess each leaflet’s quality, understandability, and actionability. Readability of the patient education materials was assessed with the Flesch-Kincaid Grade (FKG) level.

Results
ChatGPT created four different patient education materials. The document created on “dyspareunia” scored 90% on understandability, 75% on actionability, 42 on DISCERN, and a 14.4 FKG level, compared with the UpToDate handout (88%, 50%, 50, and 7.6) and the International Urogynecological Association (IUGA) handout (100%, 0%, 80, and 13.6) on the same subject. The ChatGPT handout on “female sexual dysfunction” scored 70% on understandability, 50% on actionability, 37 on DISCERN, and a 12.9 FKG level, compared with the American Urogynecologic Society (AUGS) handout (90%, 50%, 52, and 13) on the same subject. The ChatGPT handout on “vaginismus” scored 81% on understandability, 25% on actionability, 41 on DISCERN, and an 11.3 FKG level, compared with the International Society for the Study of Vulvovaginal Disease (ISSVD) handout (90%, 0%, 44, and 11.4) on the same subject. The ChatGPT handout on “vulvodynia” scored 81% on understandability, 25% on actionability, 36 on DISCERN, and an 11.5 FKG level, compared with the ISSVD handout (90%, 25%, 46, and 10.5) on the same subject.

Conclusions
The average PEMAT understandability scores suggest that the reference handouts (91.6%) are more understandable than the AI-generated materials (80.5%). The ChatGPT-generated handouts also fell short in quality according to the average DISCERN scores (39 vs. 54.4). The average FKG level for the AI-generated information was 12.525, compared with 11.22 for the reference patient education materials. Overall, ChatGPT was not able to create patient education handouts on topics related to dyspareunia that were as understandable or readable as the existing handouts. The ChatGPT-generated handouts did, however, score higher in actionability on average (50) than the reference handouts (25), which may give patients a false sense of trust in these AI-generated materials. Our study indicates that ChatGPT is not yet an ideal tool for the creation of patient education materials, nor is it a reliable information source for patients.

Disclosure
No.
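For context, the readability scores above are FKG levels; the standard Flesch-Kincaid Grade Level formula (assumed here, since the abstract does not specify the exact implementation used by the readability tool) estimates the U.S. school grade needed to understand a text from average sentence length and average syllables per word:

\[
\text{FKG} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right) + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59
\]

On this scale, the AI-generated materials’ average FKG level of 12.525 corresponds to a reading level above twelfth grade.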