Background: To evaluate the accuracy of Chat Generative Pre-Trained Transformer (ChatGPT), a large language model (LLM), in assisting with the diagnosis of neuro-ophthalmic diseases based on case reports.

Methods: We selected 22 case reports of neuro-ophthalmic diseases from a publicly available online database. These cases spanned a wide range of chronic and acute diseases commonly seen by neuro-ophthalmic subspecialists. We entered each case as a new prompt into ChatGPT (GPT-3.5 and GPT-4) and asked for the most probable diagnosis. We then presented the same information to 2 neuro-ophthalmologists and recorded their diagnoses, and compared the responses across all diagnostic sources.

Results: GPT-3.5, GPT-4, and the 2 neuro-ophthalmologists were correct in 13 (59%), 18 (82%), 19 (86%), and 19 (86%) of the 22 cases, respectively. Agreement between the diagnostic sources was as follows: GPT-3.5 and GPT-4, 13 (59%); GPT-3.5 and the first neuro-ophthalmologist, 12 (55%); GPT-3.5 and the second neuro-ophthalmologist, 12 (55%); GPT-4 and the first neuro-ophthalmologist, 17 (77%); GPT-4 and the second neuro-ophthalmologist, 16 (73%); and the first and second neuro-ophthalmologists, 17 (77%).

Conclusions: The accuracy of GPT-3.5 and GPT-4 in diagnosing patients with neuro-ophthalmic diseases was 59% and 82%, respectively. With further development, GPT-4 may have the potential to assist clinicians in clinical care settings by providing quick, accurate diagnoses in neuro-ophthalmology. The applicability of LLMs such as ChatGPT in clinical settings that lack access to subspecialty-trained neuro-ophthalmologists deserves further research.