Abstract Disclosure: G. Wu: None. E. Cheng: None. K. Rosen: None. W. Zhao: None. A. Wong: None. V.V. Toram: None.

Background: Cohere and Claude, two newer LLMs, use a retrieval-augmented generation (RAG) architecture that can supplement knowledge gaps from external sources, whereas ChatGPT and Bard rely on the Transformer architecture's trained parameters alone. RAG answers user prompts by retrieving supplementary knowledge from vector databases (a minimal code sketch of this pattern appears after the abstract). Diabetic macular edema is a common visual complication of diabetes. According to the CDC, 38.4 million Americans have diabetes, and approximately 750,000 Americans have diabetic macular edema and may have vision loss. Given the prevalence of these conditions, AI could be applied to patient education about them.

Purpose: To evaluate which of four LLMs provides the most accurate information about treatment for diabetic macular edema.

Methods: We queried two RAG-based LLMs (Cohere Coral and Claude) and two Transformer-based LLMs (ChatGPT and Bard) with two prompts:
1) "I have diabetic macular edema and I am 40 y/o white male, with 15 years of diabetes and I cannot pass the driving test in California. What will it cost me per year for treatment? I have no health insurance."
2) "I am 40 years old, white male, and have had diabetes for 15 years. I was told that I cannot pass the driving test in California. How much will I pay for one year of treatment? I have no health insurance."
We scored each response with a custom rubric weighted as follows: Readability 20%, Accuracy 40%, Helpfulness 30%, Specificity 10%. Each metric was scored out of its percentage weight (e.g., Accuracy out of 40), so a response's total is the sum of its four subscores (a short worked example of this arithmetic appears after the abstract).

Results:

Model        | Q  | Read (/20) | Acc (/40) | Help (/30) | Spec (/10) | Total (/100)
Cohere Coral | Q1 | 16         | 36        | 28         | 9          | 89
Cohere Coral | Q2 | 14         | 34        | 26         | 8          | 82
Claude       | Q1 | 18         | 32        | 26         | 7          | 83
Claude       | Q2 | 18         | 30        | 25         | 6          | 79
ChatGPT      | Q1 | 18         | 28        | 25         | 6          | 77
ChatGPT      | Q2 | 18         | 28        | 24         | 6          | 76
Bard         | Q1 | n/r        | n/r       | n/r        | n/r        | 80
Bard         | Q2 | 16         | 30        | 27         | 7          | 80

(Bard's Q1 subscores were not reported; only its total is available.)

Cohere Coral consistently provided the most detailed and accurate information, leading with an average score of 85.5/100. Claude, Bard, and ChatGPT followed with averages of 81, 80, and 76.5, respectively.

Conclusions: In our small study, RAG-based LLMs provided accurate and near-instantaneous health education. Physicians should follow these new, rapidly evolving LLMs, as they may affect patient education.

Presentation: 6/1/2024
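
To make the Background's description of RAG concrete, below is a minimal, self-contained sketch of the retrieval-augmented generation pattern: the user prompt is embedded, the most similar passages are pulled from a vector database, and those passages are prepended to the prompt before the LLM completes it. The embed() function, the toy passages, and all identifiers here are illustrative placeholders, not Cohere's or Anthropic's actual pipeline.

```python
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical placeholder embedding; a real RAG system would call an
    # embedding model here. Hash-seeded so the demo is deterministic.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

# Toy "vector database": (embedding, passage) pairs. The passages are
# invented examples, not retrieved medical content.
passages = [
    "Anti-VEGF injections are a common first-line treatment for diabetic macular edema.",
    "Uninsured patients may face widely varying out-of-pocket treatment costs.",
    "California requires a minimum visual acuity to pass the driving test.",
]
database = [(embed(p), p) for p in passages]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank stored passages by cosine similarity (vectors are unit-normalized,
    # so the dot product is the cosine similarity) and keep the top k.
    q = embed(query)
    ranked = sorted(database, key=lambda pair: float(q @ pair[0]), reverse=True)
    return [passage for _, passage in ranked[:k]]

query = ("I have diabetic macular edema and no health insurance. "
         "What will one year of treatment cost?")
context = "\n".join(retrieve(query))
# The retrieved context is prepended to the prompt; the LLM then answers
# grounded in it rather than from its trained parameters alone.
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer using the context."
print(prompt)
```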
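
As referenced in the Methods, here is a short worked example of the scoring arithmetic: each metric is scored out of its percentage weight, so a question total is a plain sum of subscores and already reflects the 20/40/30/10 weighting, and a model's overall score is the average of its two question totals. The subscores below are the abstract's reported values; the code itself is an illustrative reconstruction, not the authors' scoring tool.

```python
# Each tuple is (readability, accuracy, helpfulness, specificity), scored
# out of 20/40/30/10 respectively. Bard's Q1 subscores were not reported,
# so Bard is handled separately using its reported totals.
scores = {
    "Cohere Coral": [(16, 36, 28, 9), (14, 34, 26, 8)],
    "Claude":       [(18, 32, 26, 7), (18, 30, 25, 6)],
    "ChatGPT":      [(18, 28, 25, 6), (18, 28, 24, 6)],
}

for model, per_question in scores.items():
    totals = [sum(subscores) for subscores in per_question]
    average = sum(totals) / len(totals)
    print(f"{model}: totals={totals}, average={average}")
# Cohere Coral: totals=[89, 82], average=85.5
# Claude: totals=[83, 79], average=81.0
# ChatGPT: totals=[77, 76], average=76.5

print(f"Bard: totals=[80, 80], average={(80 + 80) / 2}")  # only totals reported
```

This reproduces the averages given in the Results (85.5, 81, 76.5, and 80).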