Purpose: To investigate the current decision-making capabilities of 6 different artificial intelligence (AI) models by assessing their refractive surgery recommendations (laser in-situ keratomileusis [LASIK] or photorefractive keratectomy [PRK]) for a theoretical patient with a history of keloid formation.

Methods: Claude-2 (Anthropic, 2023), GPT-4 (OpenAI, 2023), GPT-3.5 (OpenAI, 2022), Gemini 1.0 (Google DeepMind, 2023), Microsoft Copilot (Microsoft AI, 2023), and Google-PaLM (Google AI, 2022) underwent three systematic queries to determine the most appropriate surgical plan (LASIK or PRK) for a theoretical patient with an increasing manifest refraction of -3.50, -5.00, and -7.00 diopters (D) in both eyes, an uncomplicated ocular examination, and a history of keloid formation. The models were then tasked with providing published scientific references to support their responses. The AI models' recommendations were compared to those of a group of 6 experienced ophthalmologists, who served as a benchmark.

Results: The ophthalmologists unanimously recommended LASIK (6/6 ophthalmologists), in contrast to the unanimous initial recommendation of PRK from the AI models (6/6 models). Of the 42 references provided by the AI models, 55% were fictitious and 45% were authentic. Only 1 of the 6 models altered its initial recommendation to LASIK when presented with the same patient with a history of keloid formation but with increasing severity of myopia (-3.50 to -5.00 to -7.00 D).

Conclusions: Current AI models lack the critical-thinking abilities required to accurately analyze and weigh apparent risk factors in clinical scenarios, such as the risk of corneal haze after PRK at higher levels of myopia, particularly in cases with a history of keloid formation. [J Refract Surg. 2024;40(8):e533-e538.]